Show HN: Inferrd – open-source ML Deployment

ML Deployment made simple, scalable and collaborative
The new open-source standard to deploy, monitor and secure Machine Learning services.

Inferrd is on a mission to make machine learning deployment a breeze.

  • Maintenance-free environments. Just drag and drop your models, and get an API.
  • Your data stays in your cloud. Have full control over where you send your data, and how it’s processed.
  • No security compliance process to go through, since Inferrd is self-hosted.
  • No more volume-indexed pricing, unlike cloud-hosted solutions.

Quick start

Inferrd is installed by cloning this repository; the only dependency is Docker.

Make sure Docker is installed on your inference server. You can install it with:

curl -fsSL | sh

Then run the following commands to get started:

git clone
cd inferrd
bash ./

Now visit http://localhost:8080 and log in with email admin and password admin.
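Once a model is deployed, it is served over HTTP. As a minimal sketch of what a prediction request could look like (the endpoint path and the payload shape here are illustrative assumptions, not the documented Inferrd API — use the URL shown in your dashboard for the deployed model):

```python
import json
import urllib.request

# Hypothetical endpoint -- replace with the URL Inferrd shows for your model.
endpoint = "http://localhost:8080/v1/m/<model-id>/predict"

# A single prediction request carrying one feature vector.
payload = {"instances": [[5.1, 3.5, 1.4, 0.2]]}
body = json.dumps(payload).encode("utf-8")

request = urllib.request.Request(
    endpoint,
    data=body,
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(request) would send the request once the server
# from the quick start above is running and <model-id> is filled in.
print(body.decode("utf-8"))
```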

Features

  • Deploy from notebook: Use the Inferrd SDK to deploy straight from your notebook.
  • Wide framework support: Deploy TensorFlow, PyTorch, Hugging Face, ONNX, scikit-learn and XGBoost models hassle-free.
  • GPU support: GPU acceleration comes pre-configured out of the box.
  • Monitoring: Metrics are available as soon as your model receives its first request. Prometheus monitoring is also available.
  • Versioning: Keep track of every version you deploy and roll back anytime.
  • Request logging: Inspect all requests coming into your model.

Community support

For general help using Inferrd, please refer to the official Inferrd documentation. For additional help, you can ask a question in one of the community channels.

Enterprise Edition

Are you looking for support and more advanced features such as RBAC, AutoScale, Multi-node deployments and A/B testing? Contact us.
