A ZIP archive appears to be a database containing a single table with the following schema:
CREATE TABLE zip(
name, -- Name of the file
mode, -- Unix-style file permissions
mtime, -- Timestamp, seconds since 1970
sz, -- File size after decompression
rawdata, -- Raw compressed file data
data, -- Uncompressed file content
method -- ZIP compression method code
);
So, for example, if you wanted to see the compression efficiency (expressed as the size of the compressed content relative to the original uncompressed file size) for all files in the ZIP archive, sorted from most compressed to least compressed, you could run a query like this:
sqlite> SELECT name, (100.0*length(rawdata))/sz FROM zip ORDER BY 2;
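For comparison, here is a hedged sketch of the same compression-efficiency report done the "cobble it together in python" way, using only the standard-library zipfile module. The archive is built in memory so the example is self-contained; the file names are invented for illustration.

```python
import io
import zipfile

# Build a small in-memory archive so the example is self-contained.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("a.txt", "hello world " * 100)   # repetitive text, compresses well
    zf.writestr("b.bin", bytes(range(256)))      # high-entropy bytes, compresses poorly

with zipfile.ZipFile(buf) as zf:
    # Rough equivalent of:
    #   SELECT name, (100.0*length(rawdata))/sz FROM zip ORDER BY 2;
    rows = [
        (info.filename, 100.0 * info.compress_size / info.file_size)
        for info in zf.infolist()
        if info.file_size > 0
    ]
    rows.sort(key=lambda r: r[1])  # most compressed first
    for name, pct in rows:
        print(f"{name}: {pct:.1f}%")
```

The `compress_size` / `file_size` pair on each `ZipInfo` plays the role of the `rawdata` length and `sz` columns in the virtual table.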
Or using file I/O functions, you can extract elements of the ZIP archive:
sqlite> SELECT writefile(name,data) FROM zip WHERE name LIKE 'docProps/%';
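Again for comparison, a hedged sketch of that extraction in plain Python: filter member names by prefix, then extract only those. The `docProps/` prefix mirrors the SQL example; the archive and its entries are made up for illustration.

```python
import io
import pathlib
import tempfile
import zipfile

# Construct a tiny archive in memory with a mix of entries.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("docProps/core.xml", "<coreProperties/>")
    zf.writestr("word/document.xml", "<document/>")

outdir = tempfile.mkdtemp()
with zipfile.ZipFile(buf) as zf:
    # Rough equivalent of:
    #   SELECT writefile(name,data) FROM zip WHERE name LIKE 'docProps/%';
    members = [n for n in zf.namelist() if n.startswith("docProps/")]
    zf.extractall(outdir, members=members)

extracted = sorted(
    str(p.relative_to(outdir))
    for p in pathlib.Path(outdir).rglob("*")
    if p.is_file()
)
```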
Sounds like an optional module that lets you avoid needing to cobble something together with python/node/perl and bash to turn zipped files into sqlite data, or vice versa.
The actual documentation is quite interesting:
https://sqlite.org/cli.html#zipdb

Not sure I’d ever want to use zip. Sqlar is nice though.
https://sqlite.org/sqlar/doc/trunk/README.md
sqlar should be more prominent in the sqlite docu.
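The format described in the sqlar README is simple enough to sketch directly. Below is a minimal, hedged illustration, assuming the single-table layout from that README: content is zlib-compressed only when that actually saves space, `sz` records the original size, and readers decompress only when the stored blob is shorter than `sz`. The file name and helper functions here are invented for the example.

```python
import sqlite3
import time
import zlib

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE sqlar(
  name TEXT PRIMARY KEY,  -- name of the file
  mode INT,               -- access permissions
  mtime INT,              -- last modification time
  sz INT,                 -- original file size
  data BLOB               -- compressed content
)""")

def sqlar_write(name: str, content: bytes, mode: int = 0o644) -> None:
    compressed = zlib.compress(content)
    # Store the compressed form only if it is actually smaller.
    data = compressed if len(compressed) < len(content) else content
    db.execute("REPLACE INTO sqlar VALUES (?,?,?,?,?)",
               (name, mode, int(time.time()), len(content), data))

def sqlar_read(name: str) -> bytes:
    sz, data = db.execute("SELECT sz, data FROM sqlar WHERE name=?",
                          (name,)).fetchone()
    # Content is compressed iff the stored blob is shorter than sz.
    return zlib.decompress(data) if len(data) < sz else bytes(data)

sqlar_write("hello.txt", b"hello sqlar " * 50)
roundtrip = sqlar_read("hello.txt")
```

Because the archive is just an ordinary SQLite database, anything that speaks SQLite can list, add, or extract members without a dedicated archive tool.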
Sounds like feature creep. Zip is not a database; at best it's just slow. Use an archive utility to handle archives.