We build and maintain AI products.
The hardest part of building an AI product is usually not the model. Automated tools can find a well-performing model for your dataset, yet products involving artificial intelligence often fail. As many companies are learning the hard way, the real challenge is putting AI into production.
A huge amount of software and infrastructure needs to be in place to train and deploy models reliably. Data must be gathered properly to be useful for models, and processed in a way that is scalable yet easy to adapt to changing requirements. Models are created and tuned by data scientists, yet often deployed by engineers, which adds further complexity and delay. A failure in any one of these areas can render the whole system useless, and such failures are often costly to track down and fix.
In delivering end-to-end AI solutions to our clients, subspace has developed a methodology that ensures smooth coordination between data science, engineering and data acquisition. It is critical to have a process that lets your team iterate on ideas quickly without risking downtime to your service.
Get in touch! Send us an e-mail if you'd like to schedule a free consultation.
- Analysis, process & architecture design
- Exploratory data analysis
- Building data pipelines, big and small
- Developing, training and deploying machine learning models
- Monitoring and maintaining machine learning models
- Clouds: Amazon Web Services, Google Cloud Platform, Heroku
- DevOps & MLOps: Terraform, Kubernetes, Kubeflow, Airflow
- Deep learning: PyTorch, TensorFlow, MXNet
- Machine learning: scikit-learn, XGBoost, spaCy
- Big data: Apache Spark (PySpark), Apache Beam, GCP Dataflow
- Streaming data: Apache Kafka, RabbitMQ, Redis
- Web development: Reagent, re-frame, React, D3