Quite a while ago, Chrome moved from a structure based on “Current Session”, “Current Tabs”, “Last Session” and “Last Tabs” into “Session_#################” and “Tabs_#################” stored in a “Sessions” folder (and similar migrations for other state and configuration files).
The numbers in the “Session_*” and “Tabs_*” file names are timestamps of those sessions, so you need to figure out what, for instance, the “13310808970819630” in “Session_13310808970819630” and “Tabs_13310808970819630” means.
Lots of web pages with tips and tricks around the old structures are still around, often surfacing high in Google Search results.
I was interested in a particular trick to export Google Chrome browsing history and had a hard time figuring out the easiest solution.
Yes I know there are extensions like [Wayback/Archive] Session Buddy – Chrome Web Store (at the time of writing the last update was fall 2021) and [Wayback/Archive] Tabs Backup & Restore – Chrome Web Store (at the time of writing last update fall 2012).
Neither of them saves automatically in a consistent way, so after a Chrome crash you either miss the most recent tabs or older backups are not available.
Upon crashing, Chrome sometimes does not correctly restore the Tabs and Session files either, which is why I wrote this post: the History file is then the only thing left from which to semi-manually restore tabs.
The Tabs_* and Session_* data files are binary, start with “SNSS” followed by a binary version number (still 1), are undocumented and not easy to export, though this might come in useful (as the internal format seems to be unchanged): [Wayback/Archive] cclgroupltd/ccl-ssns: Automatically exported from code.google.com/p/ccl-ssns
A set of Python modules for performing off-line parsing of Chrome session files (“Current Session”, “Last Session”, “Current Tabs”, “Last Tabs”).
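As a quick sanity check of such a file, here is a minimal Python sketch that only validates the header; the read_snss_header helper is just for illustration and assumes (as ccl-ssns and the description above suggest) that the “SNSS” magic is followed by a 32-bit little-endian version number:

import struct
import sys

def read_snss_header(path):
    # Assumption: SNSS files start with the 4-byte magic "SNSS"
    # followed by a 32-bit little-endian version number (currently 1).
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"SNSS":
            raise ValueError(f"{path} does not look like an SNSS file (magic={magic!r})")
        (version,) = struct.unpack("<i", f.read(4))
        return version

if __name__ == "__main__":
    # Example: python snss_header.py "Session_13310808970819630"
    print("SNSS version:", read_snss_header(sys.argv[1]))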
Back to exporting the Chrome history
Dissecting the History data is a lot easier than the Tabs and Session files as the History file is stored in SQLite format.
There are even tools that can read the History SQLite database and export it for you. You can also do it yourself, which requires SQLite knowledge. That’s why I wrote my blog post Some SQLite things I recently learned a while ago.
On Windows, by far the easiest solution is [Wayback/Archive] ChromeHistoryView – View the browsing history of Chrome Web browser:
ChromeHistoryView is a small utility that reads the history data file of Google Chrome Web browser, and displays the list of all visited Web pages in the last days. For each visited Web page, the following information is displayed: URL, Title, Visit Date/Time, Number of visits, number of times that the user typed this address (Typed Count), Referrer, and Visit ID.
You can select one or more history items, and then export them into an html/xml/csv/text file, or copy the information to the clipboard and paste it into Excel.
You can also use this tool to read the history of other Chromium-based Web browsers, like Microsoft Edge, Opera, Vivaldi, and others…
Note that despite being versatile, it can run very slowly, especially on systems with a rich history. The reason is that it loads the full history file upfront, which can contain a lot of history entries. In my case the record count grew linearly up to some 20k records, then slowed down until some 200k, after which ChromeHistoryView seemed to hang. I let it run anyway (it seemed single-threaded but not message-pumping, which Windows often interprets as “not responding”), and after a few minutes it suddenly came alive and showed 450k+ records covering slightly more than a year.
This led me to check whether querying with SQLite would be easier, but before that we need to understand one more thing: the way that timestamps are stored in the History database.
Tabs/Session/History timestamps on the console
A nice thing is that timestamps are stored the same way in the History file as they are in the Tabs and Session files, which means we’re back at figuring out what the “13310808970819630” in “Session_13310808970819630” and “Tabs_13310808970819630” means.
The timestamp has microsecond resolution and, contrary to what you might think (with Google mainly having a *nix background), the zero timestamp value is based on the Windows epoch.
Before explaining this further, let’s show some console examples on various operating systems to convert it to a human readable UTC timestamp:
- BSD/MacOS:
# date -u -j -r `expr 13310808970819630 / 1000000 - 11644473600`
Fri Oct 21 06:56:10 UTC 2022
- Linux:
# date --utc --date @`expr 13310808970819630 / 1000000 - 11644473600`
Fri Oct 21 06:56:10 UTC 2022
- Windows:
C:\temp> PowerShell "(([System.DateTimeOffset]::FromUnixTimeSeconds(13310808970819630 / 1000000 - 11644473600)).DateTime).ToString('s')"
2022-10-21T06:56:11
Note that the PowerShell result is one second off because PowerShell rounds the fractional seconds to the nearest integer instead of truncating them. In case you wonder where the 1000000 and 11644473600 come from:
1000000 is the number of microseconds per second (so Chrome uses microsecond time resolution)
11644473600 is the number of seconds between the Windows epoch (1 January 1601 00:00:00 UTC) and the Unix epoch (1 January 1970 00:00:00 UTC); see further below for links.
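The same conversion as a minimal Python sketch, treating the value as microseconds since the Windows epoch (the helper name is just for illustration):

from datetime import datetime, timedelta, timezone

# Chrome/WebKit timestamps are microseconds since the Windows epoch.
WINDOWS_EPOCH = datetime(1601, 1, 1, tzinfo=timezone.utc)

def chrome_timestamp_to_datetime(chrome_microseconds):
    return WINDOWS_EPOCH + timedelta(microseconds=chrome_microseconds)

print(chrome_timestamp_to_datetime(13310808970819630))
# 2022-10-21 06:56:10.819630+00:00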
I think this might be related to Google protobuf as that is used in a lot of Google products to serialise data.
The History file structure
On some of my systems the Chrome History file is huge and seems to go back a very long time, so queries need to limit the number of records returned, and for that knowing the structure helps.
This is the schema:
C:\temp> sqlite3 C:\temp\History .schema
CREATE TABLE meta(key LONGVARCHAR NOT NULL UNIQUE PRIMARY KEY, value LONGVARCHAR);
CREATE TABLE urls(id INTEGER PRIMARY KEY AUTOINCREMENT,url LONGVARCHAR,title LONGVARCHAR,visit_count INTEGER DEFAULT 0 NOT NULL,typed_count INTEGER DEFAULT 0 NOT NULL,last_visit_time INTEGER NOT NULL,hidden INTEGER DEFAULT 0 NOT NULL);
CREATE TABLE sqlite_sequence(name,seq);
CREATE TABLE visit_source(id INTEGER PRIMARY KEY,source INTEGER NOT NULL);
CREATE TABLE keyword_search_terms (keyword_id INTEGER NOT NULL,url_id INTEGER NOT NULL,term LONGVARCHAR NOT NULL,normalized_term LONGVARCHAR NOT NULL);
CREATE INDEX keyword_search_terms_index1 ON keyword_search_terms (keyword_id, normalized_term);
CREATE INDEX keyword_search_terms_index2 ON keyword_search_terms (url_id);
CREATE INDEX keyword_search_terms_index3 ON keyword_search_terms (term);
CREATE TABLE downloads (id INTEGER PRIMARY KEY,guid VARCHAR NOT NULL,current_path LONGVARCHAR NOT NULL,target_path LONGVARCHAR NOT NULL,start_time INTEGER NOT NULL,received_bytes INTEGER NOT NULL,total_bytes INTEGER NOT NULL,state INTEGER NOT NULL,danger_type INTEGER NOT NULL,interrupt_reason INTEGER NOT NULL,hash BLOB NOT NULL,end_time INTEGER NOT NULL,opened INTEGER NOT NULL,last_access_time INTEGER NOT NULL,transient INTEGER NOT NULL,referrer VARCHAR NOT NULL,site_url VARCHAR NOT NULL,tab_url VARCHAR NOT NULL,tab_referrer_url VARCHAR NOT NULL,http_method VARCHAR NOT NULL,by_ext_id VARCHAR NOT NULL,by_ext_name VARCHAR NOT NULL,etag VARCHAR NOT NULL,last_modified VARCHAR NOT NULL,mime_type VARCHAR(255) NOT NULL,original_mime_type VARCHAR(255) NOT NULL, embedder_download_data VARCHAR NOT NULL DEFAULT '');
CREATE TABLE downloads_url_chains (id INTEGER NOT NULL,chain_index INTEGER NOT NULL,url LONGVARCHAR NOT NULL, PRIMARY KEY (id, chain_index) );
CREATE TABLE downloads_slices (download_id INTEGER NOT NULL,offset INTEGER NOT NULL,received_bytes INTEGER NOT NULL,finished INTEGER NOT NULL DEFAULT 0,PRIMARY KEY (download_id, offset) );
CREATE TABLE segments (id INTEGER PRIMARY KEY,name VARCHAR,url_id INTEGER NON NULL);
CREATE INDEX segments_name ON segments(name);
CREATE INDEX segments_url_id ON segments(url_id);
CREATE TABLE segment_usage (id INTEGER PRIMARY KEY,segment_id INTEGER NOT NULL,time_slot INTEGER NOT NULL,visit_count INTEGER DEFAULT 0 NOT NULL);
CREATE INDEX segment_usage_time_slot_segment_id ON segment_usage(time_slot, segment_id);
CREATE INDEX segments_usage_seg_id ON segment_usage(segment_id);
CREATE TABLE typed_url_sync_metadata (storage_key INTEGER PRIMARY KEY NOT NULL,value BLOB);
CREATE INDEX urls_url_index ON urls (url);
CREATE TABLE content_annotations (visit_id INTEGER PRIMARY KEY,floc_protected_score DECIMAL(3, 2),categories VARCHAR,page_topics_model_version INTEGER,annotation_flags INTEGER DEFAULT 0 NOT NULL, entities VARCHAR, related_searches VARCHAR, visibility_score NUMERIC DEFAULT -1, search_normalized_url, search_terms LONGVARCHAR, alternative_title, page_language VARCHAR, password_state INTEGER DEFAULT 0 NOT NULL);
CREATE TABLE context_annotations(visit_id INTEGER PRIMARY KEY,context_annotation_flags INTEGER DEFAULT 0 NOT NULL,duration_since_last_visit INTEGER,page_end_reason INTEGER, total_foreground_duration NUMERIC DEFAULT -1000000, browser_type INTEGER DEFAULT 0 NOT NULL, window_id INTEGER DEFAULT -1 NOT NULL, tab_id INTEGER DEFAULT -1 NOT NULL, task_id INTEGER DEFAULT -1 NOT NULL, root_task_id INTEGER DEFAULT -1 NOT NULL, parent_task_id INTEGER DEFAULT -1 NOT NULL, response_code INTEGER DEFAULT 0 NOT NULL);
CREATE TABLE downloads_reroute_info (download_id INTEGER NOT NULL,reroute_info_serialized VARCHAR NOT NULL,PRIMARY KEY (download_id) );
CREATE TABLE IF NOT EXISTS "visits"(id INTEGER PRIMARY KEY AUTOINCREMENT,url INTEGER NOT NULL,visit_time INTEGER NOT NULL,from_visit INTEGER,transition INTEGER DEFAULT 0 NOT NULL,segment_id INTEGER,visit_duration INTEGER DEFAULT 0 NOT NULL,incremented_omnibox_typed_score BOOLEAN DEFAULT FALSE NOT NULL,opener_visit INTEGER, originator_cache_guid TEXT, originator_visit_id INTEGER, originator_from_visit INTEGER, originator_opener_visit INTEGER);
CREATE INDEX visits_url_index ON visits (url);
CREATE INDEX visits_from_index ON visits (from_visit);
CREATE INDEX visits_time_index ON visits (visit_time);
CREATE INDEX visits_originator_id_index ON visits (originator_visit_id);
CREATE TABLE cluster_keywords(cluster_id INTEGER NOT NULL,keyword VARCHAR NOT NULL,type INTEGER NOT NULL,score NUMERIC NOT NULL,collections VARCHAR NOT NULL);
CREATE TABLE clusters(cluster_id INTEGER PRIMARY KEY,should_show_on_prominent_ui_surfaces BOOLEAN NOT NULL,label VARCHAR NOT NULL,raw_label VARCHAR NOT NULL);
CREATE TABLE clusters_and_visits(cluster_id INTEGER NOT NULL,visit_id INTEGER NOT NULL,score NUMERIC NOT NULL,engagement_score NUMERIC NOT NULL,url_for_deduping LONGVARCHAR NOT NULL,normalized_url LONGVARCHAR NOT NULL,url_for_display LONGVARCHAR NOT NULL,PRIMARY KEY(cluster_id,visit_id))WITHOUT ROWID;
CREATE INDEX clusters_for_visit ON clusters_and_visits(visit_id);
This means these tables are relevant for URLs:
CREATE TABLE urls(
id INTEGER PRIMARY KEY AUTOINCREMENT,
url LONGVARCHAR,
title LONGVARCHAR,
visit_count INTEGER DEFAULT 0 NOT NULL,
typed_count INTEGER DEFAULT 0 NOT NULL,
last_visit_time INTEGER NOT NULL,
hidden INTEGER DEFAULT 0 NOT NULL);
CREATE TABLE downloads_url_chains (
id INTEGER NOT NULL,
chain_index INTEGER NOT NULL,
url LONGVARCHAR NOT NULL,
PRIMARY KEY (id, chain_index) );
None of them contains tab or session information, but if your Tabs/Session files are busted there is not much else you can do but read these tables.
SQLite syntax and errors
Given that each new SQL database engine has its own gotchas, I of course bumped into a few. For example, when trying to get an idea of how many URLs were stored, I bumped into this:
C:\temp> sqlite3 c:\temp\History "select count(*) as all, count(distinct url) as unique from urls"
Error: in prepare, near "all": syntax error
select count(*) as all, count(distinct url) as unique from urls
^--- error here
followed by
C:\temp> sqlite3 c:\temp\History "select count(*) as any, count(distinct url) as unique from urls"
Error: in prepare, near "unique": syntax error
select count(*) as any, count(distinct url) as unique from urls
error here ---^
It looks like all and unique are reserved words under all circumstances, which means you need to quote any English word that is or could become a reserved word:
C:\temp>sqlite3 c:\temp\History -json "select count(*) as 'any', count(distinct url) as 'unique' from urls"
[{"any":10373,"unique":10373}]
In this case there are no duplicate url entries in the urls table of the History database. I checked systems with larger History files too and those did not have duplicates either. My assumption is that Chrome updates the existing entry with the most recent last_visit_time.
This is great, as it will make querying a lot easier.
Queries
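A minimal sketch of the kind of query this boils down to, assuming you first copied the History file to a temporary location (Chrome keeps the live one locked; the HISTORY_COPY path below is just an example) and using the same 1000000/11644473600 conversion as above:

import sqlite3

HISTORY_COPY = r"C:\temp\History"  # assumed location of your copy of the History file

query = """
    SELECT datetime(last_visit_time / 1000000 - 11644473600, 'unixepoch') AS visited_utc,
           visit_count,
           url,
           title
    FROM urls
    ORDER BY last_visit_time DESC
    LIMIT 100  -- keep huge History files manageable
"""

with sqlite3.connect(HISTORY_COPY) as connection:
    for visited_utc, visit_count, url, title in connection.execute(query):
        print(visited_utc, visit_count, url, title)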
Searches and results
- [Wayback/Archive] chrome session_ tabs_ timestamp decode epoch – Google Search
- [Wayback/Archive] 11644473600 – Google Search
- [Wayback/Archive] c++ – Convert Windows Filetime to second in Unix/Linux – Stack Overflow (thanks [Wayback/Archive] ARH and [Wayback/Archive] Eugene)
Q
I have a trace file that each transaction time represented in Windows filetime format. These time numbers are something like this:
128166372003061629
128166372016382155
128166372026382245
Would you please let me know if there are any C/C++ library in Unix/Linux to extract actual time (specially second) from these numbers ? May I write my own extraction function ?
A
it’s quite simple: the windows epoch starts 1601-01-01T00:00:00Z. It’s 11644473600 seconds before the UNIX/Linux epoch (1970-01-01T00:00:00Z). The Windows ticks are in 100 nanoseconds. Thus, a function to get seconds from the UNIX epoch will be as follows:
#define WINDOWS_TICK 10000000
#define SEC_TO_UNIX_EPOCH 11644473600LL
unsigned WindowsTickToUnixSeconds(long long windowsTicks)
{
return (unsigned)(windowsTicks / WINDOWS_TICK - SEC_TO_UNIX_EPOCH);
}
- [Wayback/Archive] 11644473600 seconds to years – Google Search
- [Wayback/Archive] dump chrome history file – Google Search
- [Wayback/Archive] Export Chrome History – Chrome Web Store (which has few selection criteria and can only export to CSV or JSON)
- [Wayback/Archive] Can Chrome browser history be exported to an HTML file? – Super User (thanks [Wayback/Archive] MountainMan, [Wayback/Archive] Traveling Tech Guy and [Wayback/Archive] Brad Patton)
Q
In Google Chrome, is there a built-in method to make an HTML file I can save to my local machine, like the file for bookmarks?
A
For posterity’s sake, the best method to get all the bookmarks data (url + date) as a CSV file is described [Wayback/Archive] in this article.
TL;DR:
- Make sure you have [Wayback/Archive] sqlite3 installed in your system. You can use compiled binaries for Windows systems.
- Locate the History file (on Mac: cd ~/Library/Application\ Support/Google/Chrome/Default/; on Windows: cd "%LocalAppData%\Google\Chrome\User Data\Default").
- Copy the file History to another location (you can’t use the original while Chrome is open).
- From a command line:
C:\> sqlite3 History
sqlite> .headers on
sqlite> .mode csv
sqlite> .output my-history.csv
sqlite> SELECT datetime(last_visit_time/1000000-11644473600,'unixepoch','localtime'), url FROM urls ORDER BY last_visit_time DESC;
You should now have a file called my-history.csv containing all URLs and dates.
Script as a gist can be found [Wayback/Archive] here.
Hopefully this works for you in 2016. Can’t promise it will in 2019 though :)
Update December 2019
Greetings from the future :)
I can confirm the Sqlite 3 solution is still working in 2019, and actually works with other Chromium-based browsers (recently tested successfully with Brave 1.1.20).
C
A
There is a tool called Chrome History View that exports to several different formats, including HTML. There is a writeup of the tool here.

- [Wayback/Archive] file format of chrome tabs and sessions files – Google Search
- [Wayback/Archive] convert windows unix epoch console – Google Search
- [Wayback/A] command line – How do I convert an epoch timestamp to a human readable format on the cli? – Unix & Linux Stack Exchange (thanks [Wayback/Archive] xenoterracide, [Wayback/Archive] Gilles ‘SO- stop being evil’, [Wayback/Archive] Richard Hansen, and [Wayback/Archive] Chris Markle), note that MacOS is BSD based.
A
On *BSD:
date -r 1234567890
On Linux (specifically, with GNU coreutils ≥5.3):
date -d @1234567890
With older versions of GNU date, you can calculate the relative difference to the UTC epoch:
date -d '1970-01-01 UTC + 1234567890 seconds'
If you need portability, you’re out of luck. The only time you can format with a POSIX shell command line (without doing the calculation yourself) is the current time. In practice, Perl is often available:
perl -le 'print scalar localtime $ARGV[0]' 1234567890
C
+1 for the comment about the lack of portability (why doesn’t the POSIX spec include a way to do this? grr)
C
What does the @ mean in date -d @1234567890? man date made no reference to that…
C
@ChrisMarkle GNU man pages are often woefully incomplete. “The date string format is more complex than is easily documented here but is fully described in the info documentation.” To wit: gnu.org/software/coreutils/manual/html_node/…
- [Wayback/Archive] Seconds since the Epoch (GNU Coreutils 9.1)
If you precede a number with ‘@’, it represents an internal timestamp as a count of seconds. The number can contain an internal decimal point (either ‘.’ or ‘,’); any excess precision not supported by the internal representation is truncated toward minus infinity. Such a number cannot be combined with any other date item, as it specifies a complete timestamp.
- [Wayback/Archive] powershell convert unix timestamp to datetime – Google Search (thanks [Wayback/Archive] mrwh1t3 and [Wayback/Archive] Fredrick)
- [Wayback/Archive] epoch – Convert Unix time with PowerShell – Stack Overflow (thanks [Wayback/Archive] mrwh1t3 and [Wayback/Archive] Fredrick)
Q
I am parsing an SQLite database using the PowerShell SQLite module, and a couple of the return values are created and modified, both of which are in Unix time.
What I would like to do is somehow convert that into “human time”.
A
Use:
(([System.DateTimeOffset]::FromUnixTimeSeconds($unixTime)).DateTime).ToString("s")
FromUnixTimeMilliseconds is also available.
ToString(“s”): Sortable: “The pattern reflects a defined standard (ISO 8601)”
Ref.: Standard Date and Time Format Strings, The Sortable (“s”) Format Specifier
- [Wayback/Archive] Standard date and time format strings: Sortable | Microsoft Learn
- Converting from year/month/date to a timestamp is harder and has far fewer results when doing a [Wayback/Archive] 11644473600 encode yyyy mm dd date – Google Search; a small sketch of the reverse conversion follows below.
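For completeness, a minimal Python sketch of that reverse conversion, again assuming the timestamp is microseconds since the Windows epoch (the helper name is just for illustration):

from datetime import datetime, timezone

WINDOWS_EPOCH = datetime(1601, 1, 1, tzinfo=timezone.utc)

def datetime_to_chrome_timestamp(moment):
    # Microseconds since 1601-01-01 00:00:00 UTC, computed with integer arithmetic.
    delta = moment - WINDOWS_EPOCH
    return (delta.days * 86400 + delta.seconds) * 1000000 + delta.microseconds

print(datetime_to_chrome_timestamp(datetime(2022, 10, 21, 6, 56, 10, tzinfo=timezone.utc)))
# 13310808970000000 (compare with Session_13310808970819630)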
Related blog posts
--jeroen