Simple extensive tar-like archive format with indexing

asar - Electron Archive

Asar is a simple extensive archive format. It works like tar in that it concatenates all files together without compression, while also supporting random access.

Features

  • Supports random access
  • Uses JSON to store files' information
  • Very easy to write a parser for

Command line utility

Install

$ npm install asar

Usage

$ asar --help

  Usage: asar [options] [command]

  Commands:

    pack|p <dir> <output>
       create asar archive

    list|l <archive>
       list files of asar archive

    extract-file|ef <archive> <filename>
       extract one file from archive

    extract|e <archive> <dest>
       extract archive


  Options:

    -h, --help     output usage information
    -V, --version  output the version number

Excluding multiple resources from being packed

Given:

    app
(a) ├── x1
(b) ├── x2
(c) ├── y3
(d) │   ├── x1
(e) │   └── z1
(f) │       └── x2
(g) └── z4
(h)     └── w1

Exclude: a, b

$ asar pack app app.asar --unpack-dir "{x1,x2}"

Exclude: a, b, d, f

$ asar pack app app.asar --unpack-dir "**/{x1,x2}"

Exclude: a, b, d, f, h

$ asar pack app app.asar --unpack-dir "{**/x1,**/x2,z4/w1}"

Using programmatically

Example

var asar = require('asar');

var src = 'some/path/';
var dest = 'name.asar';

asar.createPackage(src, dest, function() {
  console.log('done.');
});

Please note that there is currently no error handling provided!

Transform

You can pass in a transform option, which is a function that returns either nothing or a stream.Transform. The latter will be applied to files going into the .asar file to transform them (e.g. to compress them).

var asar = require('asar');

var src = 'some/path/';
var dest = 'name.asar';

function transform(filename) {
  return new CustomTransformStream()
}

asar.createPackageWithOptions(src, dest, { transform: transform }, function() {
  console.log('done.');
});

Using with grunt

There is also an unofficial Grunt plugin for generating asar archives at bwin/grunt-asar.

Format

Asar uses Pickle to safely serialize binary values to a file; there is also a Node.js binding of the Pickle class.

The format of asar is very flat:

| UInt32: header_size | String: header | Bytes: file1 | ... | Bytes: file42 |

The header_size and header are serialized with the Pickle class, and header_size's Pickle object is 8 bytes.

The header is a JSON string, and header_size is the size of header's Pickle object.

The structure of header looks like this:

{
   "files": {
      "tmp": {
         "files": {}
      },
      "usr" : {
         "files": {
           "bin": {
             "files": {
               "ls": {
                 "offset": "0",
                 "size": 100,
                 "executable": true
               },
               "cd": {
                 "offset": "100",
                 "size": 100,
                 "executable": true
               }
             }
           }
         }
      },
      "etc": {
         "files": {
           "hosts": {
             "offset": "200",
             "size": 32
           }
         }
      }
   }
}

offset and size record the information needed to read the file from the archive. The offset starts from 0, so you have to manually add the sizes of header_size and header to offset to get the file's real offset in the archive.

offset is a UINT64 number represented as a string, because there is no way to precisely represent a UINT64 in a JavaScript Number. size is a JavaScript Number no larger than Number.MAX_SAFE_INTEGER, which has a value of 9007199254740991 and is about 8PB in size. We didn't store size as UINT64 because file sizes in Node.js are represented as Number and it is not safe to convert a Number to UINT64.