Open source has become almost synonymous with AI and is largely responsible for the field's rapid growth over the last few years. AI development frameworks like TensorFlow, Keras, and CNTK are all open source, with their code posted on GitHub for developers to improve upon and fork as needed. Although each of these frameworks has a corporate backer (TensorFlow and Keras with Google, and CNTK with Microsoft), they are all largely driven by a growing ecosystem of third-party developers. The collaborative way AI algorithms and models are developed on these frameworks, with research openly published on arXiv and code downloadable from GitHub for reproducibility, has fueled the AI revolution in a way largely unlike previous technology revolutions.
Expanding Model and Data Repositories to Accommodate Deep Learning
However, the open-source nature of AI collaboration only goes so far within the enterprise. Enabling AI model development on open-source frameworks is just one part of the process. In addition to model building, enterprise AI development needs robust model and data repositories onto which models can be easily onboarded, the ability to execute models in different target environments, and additional capabilities focused on enhancing and sharing models.
Enterprise AI platforms have been largely proprietary. Most have focused on traditional machine learning capabilities, in what is commonly known as data science or data analytics. These include a range of platform players, such as RapidMiner, DataRobot, SAS, Alteryx, Domino, IBM, Microsoft, KNIME, H2O.ai, Dataiku, SAP, and others.
Support for deep learning models is not yet widespread in enterprise data science platforms, with some vendors well ahead of others. And that is where the problem lies. For the most part, the data analytics tools common across many enterprise segments have been slow to join the deep learning revolution. Open-source frameworks like TensorFlow and Keras are primarily deep learning frameworks. Deep learning is a branch of machine learning that uses deep neural networks such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), long short-term memory (LSTM) networks, and deep Q-networks (DQNs). Deep learning models are better suited to perceptual problems like vision and language, though their applications are also extending into broader data classification and predictive analytics, where traditional machine learning algorithms have been used. Another branch of machine learning, reinforcement learning, which has massive potential for enterprise applications, is also largely supported by open-source frameworks rather than enterprise AI platforms.
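To make the distinction concrete, the core operation inside a CNN layer is a small kernel slid across an input. Below is a minimal, illustrative sketch of that operation in plain Python; production frameworks like TensorFlow and Keras implement it with highly optimized kernels (e.g., via their convolutional layer APIs), so this is for intuition only.

```python
def conv2d(image, kernel):
    """Slide `kernel` over `image` (both lists of lists) and return the
    valid cross-correlation map, as a CNN layer computes per channel."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            acc = 0.0
            for a in range(kh):
                for b in range(kw):
                    acc += image[i + a][j + b] * kernel[a][b]
            row.append(acc)
        out.append(row)
    return out

# A toy vertical-edge-detecting kernel applied to a tiny image
# containing one bright column; the response peaks at the edge.
image = [
    [0, 0, 1, 0],
    [0, 0, 1, 0],
    [0, 0, 1, 0],
]
kernel = [
    [-1, 1],
    [-1, 1],
]
print(conv2d(image, kernel))
```

In a deep network, many such kernels are learned from data and stacked in layers, which is what makes these models effective on perceptual inputs like images.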
Matching the Platform Development Capabilities of Hyperscalers
Hyperscalers like Google, Facebook, and Microsoft have already developed their own internal platforms to super-scale deep learning models and productize them within their own companies. For example, Google was able to quickly redirect DeepMind's machine learning research, best known for AlphaGo, toward reducing energy use across its datacenters. These companies use their own model development frameworks, complemented by other components that together form a larger enterprise-scale platform for scaling AI within their organizations. What is needed in the wider enterprise context is an open-source platform that provides capabilities similar to what the hyperscalers have internally.
In addition to the technology gap that is emerging between hyperscalers and other enterprise companies in terms of AI adoption, most proprietary enterprise data science platforms also limit users from exporting models, constraining their usage and scalability.
Acumos Is a Good Barometer for Measuring AI Application Development
Acumos began as an open-source telecom AI platform created by AT&T and Tech Mahindra and hosted by the Linux Foundation, but it has now grown into a more general platform for developing AI applications across the enterprise. Acumos AI is an open-source framework that aims to simplify development of AI applications by wrapping tools like TensorFlow with a common application programming interface (API) that allows them to connect seamlessly as microservices. Developers can export libraries, models, and all other required information as Docker images. Acumos is also creating a marketplace for AI models that catalogs and stores AI models developed and contributed by the community.
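The value of the common-API approach is that models built with different tools become interchangeable, composable units. The sketch below illustrates that pattern generically in Python; the class and function names are hypothetical, chosen for illustration, and are not Acumos's actual API, which is defined by the project's own onboarding libraries.

```python
class ModelWrapper:
    """Adapt any callable model (regardless of the framework it was
    built with) to one uniform predict() interface."""

    def __init__(self, name, predict_fn):
        self.name = name
        self._predict_fn = predict_fn

    def predict(self, payload):
        # In a real platform this boundary would be an HTTP/gRPC
        # microservice endpoint with a shared serialization schema.
        return {"model": self.name, "output": self._predict_fn(payload)}


def chain(wrappers, payload):
    """Pipe a payload through a sequence of wrapped models, the way
    composable microservices are connected into a pipeline."""
    for wrapper in wrappers:
        payload = wrapper.predict(payload)["output"]
    return payload


# Two toy "models" with different internals, unified by the wrapper.
scaler = ModelWrapper("scaler", lambda xs: [x * 2 for x in xs])
summer = ModelWrapper("summer", lambda xs: sum(xs))
print(chain([scaler, summer], [1, 2, 3]))
```

Because every model exposes the same interface, the pipeline above does not need to know which framework produced each model, which is the property that lets a platform like Acumos catalog and recombine community-contributed models.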
Acumos has garnered widespread support, growing from AT&T and Tech Mahindra to include other companies like Amdocs, Baidu, Huawei, Nokia, Tencent, Univa, and ZTE. In the short time that Acumos has been around, it has seen a wide variety of use cases spanning 5G and software-defined networks (SDNs), media and entertainment, security, automotive, the Internet of Things (IoT), home automation, mobile, infrastructure, manufacturing, real estate, retail, travel, transportation and logistics, and utilities. The rapid growth of use cases is not surprising, as the AI developer community and the wider enterprise ecosystem need an open-source platform like Acumos to accelerate enterprise AI adoption. More details about the growing membership and use cases are expected to be announced in the next few months.
In the meantime, it will be interesting to see what percentage of use cases employ deep learning models, or even reinforcement learning. In the long run, the open-sourced Acumos platform will be a good barometer for measuring AI development in the enterprise. How Acumos affects the offerings of proprietary AI platform providers will also be worth watching.