Last year saw an explosion of advances in artificial intelligence (AI) models and tools – making the technology and its myriad use cases accessible to novice developers and general business users.
In the next wave of digital transformation, we will see artificial intelligence technology propel the enterprise forward, bringing superior automation to creative processes like writing and video production. In the near future, business users will be empowered to use AI-based technology in their everyday workflows.
One of the most notable advances in artificial intelligence is the growth propelled by open-source models. Key business leaders have already begun to experiment in this space, pushing AI innovation even further. Following the success of the GPT series, culminating in GPT-3, there is a greater opportunity for AI acceleration via a new kind of software development. Through their partnership, Microsoft and NVIDIA have built a mega model, MT-NLG (Megatron-Turing Natural Language Generation), that makes it possible for developers to build new artificial intelligence solutions, furthering AI's accessibility.
Within a few months of the launch of the highly praised image-generating model DALL-E 2, we've seen an explosion of comparable models. Midjourney and the open-source Stable Diffusion gave virtually everyone with a powerful enough computer the ability to create beautiful text-to-image graphics using only a simple prompt.
Developers, and consequently enterprise organizations, are building on the momentum these open-source programs have created. Alongside developments in generative AI technology, we are seeing new applications of AI, driven by start-ups like SambaNova, reimagine the foundations of information sharing in business. They are redefining the tools we use to accelerate business transformation and opening new avenues for value creation. Here are six emerging trends to watch as we head into 2023:
- Text-to-Video. In addition to DALL-E and Hugging Face, programs like Meta's Make-A-Video and RunwayML are making it possible to create videos from only a few keywords. Developers continue to refine these programs, layering innovation onto the already established process and paving the way for everyday creators to take on what was once considered an automation-proof discipline: design.
- AI and User Experience. More and more, we are seeing consumers and clients prioritize experience, so much so that it has become a key differentiating factor for both B2C and B2B brands. According to PwC, 17% of consumers say they will abandon a brand after just one poor experience.
No longer is excellent customer service reserved for traditional industries like hospitality or luxury fashion; it is everywhere, and it is key to building brand loyalty. According to the same report, 63% of consumers said they would be willing to share their personal data (like location, age, and lifestyle) with a brand they valued and felt valued by. Using new generative AI models, non-data scientists are exploring how this first-party data can power more interactive consumer experiences.
- The Rise of Phygital. Phygital, a blend of the words physical and digital, describes a brand experience that combines digital touchpoints (on your mobile phone, for example) with an in-person interactive experience. If you've ever shopped at an Amazon Go store, you've felt the effects of phygital first-hand. AI will continue to widen the possibilities of phygital, creating new revenue opportunities and novel experiences for the end user.
- Collaborative AI. The applications of AI grow exponentially when you can train models at the data's source. Using federated learning, tools like Integrate.ai, Google's TensorFlow Federated, and NVIDIA FLARE can train artificial intelligence models on data that cannot be moved from where it lives. This is critical to furthering AI's usage in highly regulated industries like BFSI (banking, financial services, and insurance) and healthcare, where data is too sensitive to be transferred.
- Small Footprint AI. Small footprint AI, also called Tiny AI, is emerging to reduce the energy and physical space needed for training while also shrinking the carbon footprint of traditional, resource-intensive AI modeling. Tiny AI is designed to pack the computational power of existing deep learning models into a much tighter physical space. This has big implications for self-driving cars, where, without the need to ping the cloud for deep learning, reaction time becomes significantly faster.
- Small Data AI. Though similar in name to small footprint AI, small data AI refers to the learnings an AI model generates from a small data set. Small data sets have always been easy for general business users to digest without significant human error; applying AI to understand and learn from these micro data sets is the next step in bringing more efficiency to enterprise workflows.
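To make the collaborative AI trend above concrete, the sketch below simulates federated averaging (the FedAvg idea underlying frameworks like NVIDIA FLARE and TensorFlow Federated) in plain Python. The clients, data, and tiny linear model here are entirely hypothetical; real frameworks add secure aggregation, differential privacy, and production-grade training loops.

```python
# Minimal federated-averaging sketch: each client trains locally on data
# that never leaves its site; only model weights travel to the server.
import random

def local_train(weights, data, lr=0.01, epochs=20):
    """Fit y = w*x + b on local data via gradient descent; data stays put."""
    w, b = weights
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return (w, b)

def fed_avg(client_weights):
    """Server step: average client weights; raw data is never transmitted."""
    n = len(client_weights)
    return (sum(w for w, _ in client_weights) / n,
            sum(b for _, b in client_weights) / n)

# Simulate three hospitals, each holding private samples of y = 2x + 1.
random.seed(0)
clients = [[(x, 2 * x + 1) for x in (random.uniform(0, 1) for _ in range(30))]
           for _ in range(3)]

global_weights = (0.0, 0.0)
for _ in range(10):  # communication rounds
    updates = [local_train(global_weights, data) for data in clients]
    global_weights = fed_avg(updates)

print(global_weights)  # converges toward (2, 1) without pooling raw data
```

The pattern generalizes: swap the toy regression for a neural network and the averaging step for a weighted, encrypted aggregation, and the privacy property stays the same.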
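Similarly, the small footprint AI trend can be illustrated with one of its simplest techniques: 8-bit post-training quantization. This toy example (the weight values are made up) shows how float weights can be stored in a quarter of the space with a bounded reconstruction error.

```python
# Symmetric int8 quantization sketch: store each 32-bit float weight as a
# single signed byte plus one shared scale factor.
def quantize(weights):
    """Map floats to int8 values in [-127, 127] with a shared scale."""
    scale = max(abs(w) for w in weights) / 127
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return [v * scale for v in q]

weights = [0.91, -0.42, 0.07, -1.20, 0.55]   # hypothetical model weights
q, scale = quantize(weights)
restored = dequantize(q, scale)

# Each weight now needs 1 byte instead of 4; rounding error <= scale / 2.
print(max(abs(a - b) for a, b in zip(weights, restored)))
```

Production toolchains (e.g. mobile inference runtimes) combine this with pruning and distillation, which is how Tiny AI fits deep learning into cars and edge devices without a round trip to the cloud.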
Much like low-code/no-code has empowered citizen developers to own their IT processes, the combination of open source and low-code/no-code AI is becoming a catalyst for democratized artificial intelligence applications. We are starting to see this come to life in applications like RunwayML that rely on simple prompts to yield advanced results. In reality, much of the complex engineering is hidden below the surface to make the end-user experience more intuitive and rewarding.