Neural Network Distiller, by Intel AI Lab, is an open-source Python package for neural network compression research. Distiller provides a PyTorch environment for prototyping and analyzing compression algorithms. Its features include:
- Automated Model Compression
- Element-wise pruning using magnitude thresholding, sensitivity thresholding, target sparsity level, and activation statistics
- Structured pruning
- Model control
- Knowledge distillation
- Scheduling of pruning, regularization, and learning rate decay
- Element-wise and filter-wise pruning sensitivity analysis
- Conditional computation
- Example Jupyter notebooks
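To illustrate the element-wise magnitude-thresholding approach mentioned above, here is a minimal, self-contained sketch in plain NumPy. It is not Distiller's actual API (Distiller drives pruning through YAML-defined schedules and a compression scheduler); the `magnitude_prune` function and its signature are illustrative assumptions only.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction of weights.

    This is a generic illustration of element-wise magnitude
    thresholding, not Distiller's implementation.
    """
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    # Threshold = k-th smallest absolute value across all elements
    threshold = np.sort(np.abs(weights), axis=None)[k - 1]
    # Keep only weights whose magnitude exceeds the threshold
    mask = np.abs(weights) > threshold
    return weights * mask

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64))
pruned = magnitude_prune(w, sparsity=0.5)
print(f"achieved sparsity: {np.mean(pruned == 0):.2f}")
```

In practice a tool like Distiller applies such masks gradually over training steps (its scheduling feature), rather than in one shot as shown here.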