package:caffemodel2-deepscale-squeezenet-1.1 (v1.0.0)
License: BSD
Creation date: 2017-05-14
Source: GitHub
cID: 1dc07ee0f4742028:d1d03644761868f6


Description  

This meta package is our attempt to provide a unified Python API, CLI and JSON meta description on top of different package managers and build tools, so that the components needed to run portable program pipelines (models, data sets, libraries, frameworks, tools) can be downloaded and installed automatically across evolving platforms. Our ongoing goal is to make this onboarding process as simple as possible via this platform. Please check the CK white paper, and don't hesitate to contact us if you have suggestions or feedback!


ReadMe  

The Caffe-compatible files that you are probably looking for:

SqueezeNet_v1.0/train_val.prototxt          #model architecture
SqueezeNet_v1.0/solver.prototxt             #additional training details (learning rate schedule, etc.)
SqueezeNet_v1.0/squeezenet_v1.0.caffemodel  #pretrained model parameters

If you find SqueezeNet useful in your research, please consider citing the SqueezeNet paper:

@article{SqueezeNet,
    Author = {Forrest N. Iandola and Matthew W. Moskewicz and Khalid Ashraf and Song Han and William J. Dally and Kurt Keutzer},
    Title = {SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and $<$1MB model size},
    Journal = {arXiv:1602.07360},
    Year = {2016}
}

Helpful hints:

  1. Getting the SqueezeNet model: git clone <this repo>. In this repository, we include Caffe-compatible files for the model architecture, the solver configuration, and the pretrained model (4.8MB uncompressed).

  2. Batch size. We have experimented with batch sizes ranging from 32 to 1024. In this repo, our default batch size is 512. Implemented naively on a single GPU, a batch size this large may exhaust memory. An effective workaround is hierarchical batching (sometimes called "delayed batching"): Caffe holds train_val.prototxt>batch_size training samples in memory at a time, and after solver.prototxt>iter_size iterations the gradients are summed and the model is updated, so the effective batch size is batch_size * iter_size. In the included prototxt files we have set (batch_size=32, iter_size=16), but any combination of batch_size and iter_size that multiplies to 512 will produce equivalent results. In fact, with the same random number generator seed, the model is fully reproducible across training runs. Finally, note that in Caffe iter_size is applied while training on the training set but not while testing on the test set.
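To make the batching arithmetic concrete, here is a sketch of the two relevant settings. The field names follow Caffe's standard prototxt syntax; the layer name and surrounding structure are illustrative, not copied verbatim from this repo's files:

```protobuf
# train_val.prototxt (excerpt): each iteration holds 32 samples in GPU memory
layer {
  name: "data"
  type: "Data"
  data_param {
    batch_size: 32    # samples processed per forward/backward pass
  }
}

# solver.prototxt (excerpt): accumulate gradients over 16 iterations
# before each weight update
iter_size: 16         # effective batch size = 32 * 16 = 512
```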

  3. Implementing Fire modules. In the paper, we describe the expand portion of the Fire layer as a collection of 1x1 and 3x3 filters. Caffe does not natively support a convolution layer that has multiple filter sizes. To work around this, we implement expand1x1 and expand3x3 layers and concatenate the results together in the channel dimension.
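The expand-branch workaround can be sketched in prototxt as below. The naming convention (fire2/squeeze1x1, fire2/expand1x1, ...) follows the pattern used in SqueezeNet's released prototxt files, but the filter counts and bottom blob are illustrative here, and the interleaved ReLU layers are omitted for brevity:

```protobuf
layer {
  name: "fire2/squeeze1x1"
  type: "Convolution"
  bottom: "pool1"
  top: "fire2/squeeze1x1"
  convolution_param { num_output: 16  kernel_size: 1 }
}
layer {
  name: "fire2/expand1x1"
  type: "Convolution"
  bottom: "fire2/squeeze1x1"
  top: "fire2/expand1x1"
  convolution_param { num_output: 64  kernel_size: 1 }
}
layer {
  name: "fire2/expand3x3"
  type: "Convolution"
  bottom: "fire2/squeeze1x1"
  top: "fire2/expand3x3"
  convolution_param { num_output: 64  kernel_size: 3  pad: 1 }
}
# Join the two expand branches along the channel dimension
layer {
  name: "fire2/concat"
  type: "Concat"
  bottom: "fire2/expand1x1"
  bottom: "fire2/expand3x3"
  top: "fire2/concat"
}
```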

  4. The SqueezeNet team has released a few variants of SqueezeNet. Each of these includes pretrained models, and the non-compressed versions include training protocols as well.

SqueezeNet v1.0 (in this repo), the base model described in our SqueezeNet paper.

Compressed SqueezeNet v1.0, as described in the SqueezeNet paper.

SqueezeNet v1.0 with Residual Connections, which delivers higher accuracy without increasing the model size.

SqueezeNet v1.0 with Dense→Sparse→Dense (DSD) Training, which delivers higher accuracy without increasing the model size.

SqueezeNet v1.1 (in this repo), which requires 2.4x less computation than SqueezeNet v1.0 without diminishing accuracy.

  5. Community adoption of SqueezeNet:

SqueezeNet in the MXNet framework, by Guo Haria

SqueezeNet in the Chainer framework, by Eddie Bell

SqueezeNet in the Keras framework, by dt42.io

Neural Art using SqueezeNet, by Pavel Gonchar

SqueezeNet compression in Ristretto, by Philipp Gysel

What's new in SqueezeNet v1.1?

|                   | SqueezeNet v1.0    | SqueezeNet v1.1                        |
| :---------------- | :----------------- | :------------------------------------- |
| conv1             | 96 filters of resolution 7x7 | 64 filters of resolution 3x3 |
| pooling layers    | pool_{1,4,8}       | pool_{1,3,5}                           |
| computation       | 1.72 GFLOPS/image  | 0.72 GFLOPS/image (2.4x less)          |
| ImageNet accuracy | >= 80.3% top-5     | >= 80.3% top-5                         |

SqueezeNet v1.1 has 2.4x less computation than v1.0, without sacrificing accuracy.
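The 2.4x figure follows directly from the per-image GFLOPS in the comparison above; a quick check in plain Python, using the numbers from the table:

```python
# GFLOPS per image, taken from the v1.0 vs v1.1 comparison table
v1_0_gflops = 1.72
v1_1_gflops = 0.72

ratio = v1_0_gflops / v1_1_gflops
print(f"{ratio:.2f}x")  # 2.39x, rounded to "2.4x" in the text
```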
