Ravish Ailinani

Ravish Ailinani

Ravish Ailinani, Partner of Dallas Venture Capital (DVC). Ravish Ailinani has extensive experience in corporate strategy, finance and venture capital investments. He has over $70B of announced transaction experience including M&A, IPOs, growth stage investing, equity and debt capital market transactions and capital raises.

Siddharth Pratahkal

Siddharth Pratahkal

Siddharth Pratahkal interned with Dallas Venture Capital and is an MBA candidate at the Yale School of Management. A passionate problem-solver, he's highly interested in the confluence of business, investing and deep tech. He's worked with startups on product management, GTM and growth strategies.

Generating Value: Exploring Investment Opportunities in the Generative AI Landscape

1

Table of contents:

  1. Executive Summary
  2. Introduction to Generative AI
    • Generative AI through the ages
  3. Technical Overview
  4. Market size, opportunities and growth
    • Market Segmentation
    • Recent trends
  5. Use Cases
    • Generative AI, LLMs and Conversational AI
    • Generative AI in the VC world
  6. Framework
    • Glossary of terms
  7. Deep dive into the AI Ops and Tooling layer
  8. Responsible AI
    • Caveats to current LLMs and going forward
  9. Other future opportunities?
    • LLM Management Layer
    • LLM Operations
    • From the cloud to the edge
    • Environmental concerns
  • References

I. Executive Summary

Generative AI has been a major focus of interest for investors, entrepreneurs, and several technology giants in recent years. This landscape is in a constant state of evolution, making remarkable strides in a matter of weeks. Today, generative AI stands at the forefront of technological adoption, with exponential improvements and real-world applications reshaping industries.

Key takeaways:

  • Language and Vision: Revolutionizing the Startup Landscape. Language and vision applications have emerged as dominant use cases with natural language interfaces occupying the largest market share.
  • Foundation Models: The Bedrock of Generative AI. An array of generative AI capabilities relies on robust foundation models that serve as the building blocks for innovation. 

  • Shaping the Future: Enhancing Accuracy and Exploring New Frontiers. The future of generative AI will witness a relentless pursuit of enhancing the accuracy and realism of generated content. As the technology progresses, experts envision entirely new realms of content creation, including immersive virtual worlds and interactive narratives. Additionally, the potential applications extend to personalized content generation, targeted advertising, and even automated scientific discovery. 

  • Bridging the Gap: Enterprise and B2B Transformations. Enterprise and B2B solutions are yet to fully catch up in terms of offerings and capabilities with their B2C counterparts [More on this here]. However, these solutions hold immense promise, as they aim to synthesize vast amounts of information, revamp enterprise workflows, and boost productivity for knowledge workers. 

  • Overcoming Challenges: Paving the Way for Enterprise Adoption. The path to enterprise adoption of generative AI presents challenges such as bias and hallucination. Nonetheless, continuous advancements in large language models, improved AI operations, enhanced tooling, and the availability of high-quality training data are driving us closer to the imminent enterprise adoption of generative AI.
In this paper DVC gives a high level background of Generative AI, updates on latest investment trends and proposes a framework to analyze various opportunities and companies in the category.

II. Introduction to Generative AI

Generative AI is a branch of AI that creates new data, like images, text, signals and music, based on existing data. A model tries to uncover the underlying features of the data and uses probability, optimization and statistics to create new and unique observations that look like they came from the original data.

Generative AI is different from other forms of AI because it generates both the inputs and outputs, not just identifying, categorizing or classifying them. To truly understand what generative modeling aims to achieve and why this is important, it is useful to compare it to its counterparts  Discriminative AI or Predictive AI. Discriminative AI tries to distinguish between certain kinds of input, such as answering “Is this image a cat or a dog?” Predictive AI uses historical data to predict likely future events such as “Is this dog likely to go left or right” whereas Generative AI responds to prompts like “Draw a picture of a cat sitting next to a dog.” 

Generative AI through the ages

One of the earliest examples of generative AI was the first computer-generated music, created in 1951 by Christopher Strachey at the University of Manchester.

In the 1960s and 1970s, researchers explored using computers for creative output, adapting the 19th-century Harmonograph device to generate computer-generated art. In the 1980s and 1990s, generative AI gained more attention with advancements in computing power and machine learning algorithms, leading to the development of probabilistic models like Hidden Markov Models and Boltzmann Machines. The “AARON” program, created by artist Harold Cohen, also emerged during this time, utilizing rules and algorithms to produce original artwork.

In the early 2000s, the field of generative AI began to expand rapidly, with the development of new techniques such as deep learning and neural networks. These techniques allowed researchers to create models that could generate more complex and realistic content, such as images and videos. 

In 2014, a Ph.D. student in machine learning at the Université de Montréal, Ian Goodfellow, under the supervision of one of the Godfathers of AI, Prof. Yoshua Bengio, and Prof. Aaron Courville introduced the concept of Generative Adversarial Networks (GANs). GANs are a type of neural network that can generate realistic content by training two models against each other – a generator that creates new content, and a discriminator that evaluates the content to determine whether it is real or fake. GANs have since become one of the most popular and widely used techniques in generative AI. 

Another important advancement was the development of transformers from Google’s landmark paper in 2017 which helped revolutionize the way language tasks are processed. It is still the bedrock of several state-of-art models like GPT-4 and PaLM 2. Transformers are a type of neural network that can learn to generate text by modeling the relationship between words in a sentence using specialized attention frameworks. Transformers have been used to generate high-quality language translations, as well as to generate creative writing and poetry. 

Today, generative AI is a rapidly evolving field that is being used to create new and exciting applications, such as generating realistic images, generating music and speech, and even generating new molecules for drug discovery. As computing power continues to increase and new techniques and algorithms are developed, the possibilities for generative AI are virtually limitless.

III. Technical Overview

Generative AI models use probability distributions to model the data they are trained on. These probability distributions can be simple, such as a Gaussian distribution, or more complex, such as a beta or gamma distribution. The model then learns the parameters of these distributions from the training data and can use them to generate new data points that are similar to the training data. 

The vehicle that drives most current models is Deep Learning. Deep Learning is a subfield of machine learning that involves training artificial neural networks to learn from data and make predictions or decisions. VAEs (see below for detailed explanation) and GANs are two popular approaches for generative modeling that have been used to generate realistic images, music, and text. In both GANs and VAEs, the underlying mathematics involves optimization techniques, such as gradient descent, to learn the parameters of the model. Additionally, these models often use techniques such as regularization to prevent overfitting and improve the generalization performance of the model.

There are several different types of generative AI models, but they all share a common goal of generating new data that looks like it could have come from the original dataset. Here are some common techniques of modeling and generating data: 

Variational Autoencoders (VAEs): A VAE is a type of deep neural network that is trained to encode input data into a lower-dimensional space, and then decode it back into the original space. By learning to compress the data into a lower-dimensional space, the VAE can learn the underlying distribution of the data and generate new samples that are similar to the original data. The VAE is trained to minimize the difference between the generated data and the training data, using a measure called the reconstruction loss. VAEs are slightly older versions of generative AI models that are increasingly being replaced with GAN-based approaches.

Generative Adversarial Networks (GANs): A GAN is a type of neural network that is composed of two parts: a generator and a discriminator. The generator is trained to generate new samples of data that look like they came from the original dataset, while the discriminator is trained to distinguish between real and fake samples. The two networks are trained together in a process called adversarial training, where the generator tries to fool the discriminator, and the discriminator tries to correctly identify the real data. Image-based generative AI models largely use a GAN framework. 

Autoregressive Models: An autoregressive model is a type of time-series model that predicts the probability distribution of the next value in a sequence, based on the previous values. These models can be used to generate new sequences of data that are similar to the original sequence. Autoregressive models are typically used for sequential data, such as text or time series data, where the order of the data is important. 

Transformers: Transformers are a type of neural network architecture designed to handle sequential data, such as text, by allowing the model to attend to different parts of the input sequence as it generates the output sequence. Transformers use a self-attention mechanism to weigh the importance of each word in the input sequence for generating each word in the output sequence. Transformers have been used for various natural language processing tasks, including language modeling, machine translation, and text generation.

Diffusion networks: These are a type of probabilistic model that uses a sequence of steps (known as diffusion steps) to generate high-quality samples from complex data distributions, such as images or video. They do not rely on explicit modeling of the joint probability distribution or on predicting the next token in a sequence. Instead, they model the probability of observing a data sample after a sequence of diffusion steps that add noise to the data, and they can be trained in an unsupervised manner. 

Other modeling methods include Markov Chain Monte Carlo (MCMC), Hamiltonian Monte Carlo (HMC), Variational Inference, Gibbs Sampling, etc. Transformer-based, GAN-based and diffusion network approaches are currently state of the art and are used extensively in the commercial world.

IV. Market size, Opportunities and Growth

According to one report, the global generative AI market has experienced substantial growth, with a revenue of USD 7.9 Billion in 2021 and a compound annual growth rate (CAGR) of 34.3% expected from 2022 to 2030. The North America generative AI market accounted for the largest share of over 40% in 2021, while the Asia-Pacific generative AI market is predicted to achieve a significant CAGR of approximately 36% from 2022 to 2030, indicating substantial growth potential in the region. The global generative AI market is expected to reach $42.6 billion in 2023, says Pitchbook.

Market Segmentation

According to Grandview Research, the global market for generative AI can be categorized by component types, modeling approach, end-user, and region.  

The component types can be divided into software and services, with the software segment holding the largest revenue share of 65.0% in 2021. The service segment is expected to witness the fastest growth rate of 35.5% during the forecast period. 

The market is further segmented by technology, including GAN-based, transformer-based, VAE-based, autoregressive-based and diffusion network-based approaches.  

The transformers segment held the largest revenue share of 41.7% in 2021, primarily due to the increasing adoption of transformers applications such as text-to-image AI. On the other hand, the diffusion networks segment is expected to witness the fastest growth rate of 37.2% during the forecast period. 

Based on end-use, the market is segmented into media & entertainment, BFSI, IT & telecommunications, healthcare, automotive & transportation, and others. The media & entertainment segment accounted for the largest revenue of USD 1,525.6 million in 2021 and is projected to grow at a CAGR of 33.5% over the forecast period. This can be attributed to the increasing adoption of generative AI for creating better advertisement campaigns. The other sub-segment comprises security, aerospace & defense.
Based on region, North America dominated the market with a share of 40.2% in 2021 and is projected to grow at a CAGR of 34.7% over the forecast period. The market can be further bifurcated into regions such as Europe, Asia Pacific, Latin America, and the Middle East & Africa.

Recent trends

OpenAI’s ChatGPT has taken the world by storm and continues to make headlines, making it the fastest growing consumer internet application in history, according to UBS reports. Currently, we have GPT-4 which is a multimodal version of ChatGPT with image and video capabilities. Microsoft announced Co-pilot that integrates the power of Large Language Models (LLMs) into Microsoft Office 365. With this integration, Microsoft hopes to “turn your words into the most powerful productivity tool on the planet.” Google announced Bard “to combine the breadth of the world’s knowledge with the power, intelligence and creativity of our large language models” followed by Generative AI experiences throughout the Workplace. 

Meta Platforms has announced it will be creating a new product group focused on generative AI across WhatsApp, Instagram, Messenger and Facebook. Other companies that have already integrated ChatGPT features include Duolingo, Slack, Snapchat, Bain & Company, Coca-Cola, etc. 

NVIDIA, the dominant provider of GPUs used to train neural networks, and all major providers of infrastructure for AI training, have benefitted hugely from this surge with its valuation crossing $1 trillion.. 

We also saw unexpected impacts of Generative AI, most recently with Chegg, an educational technology company losing 50% of its market cap when management announced that ChatGPT has materially impacted their business model. 

According to CB Insights, the generative AI space has seen 13 companies reach unicorn status (as of May 10, 2023), including OpenAI, Hugging Face, Lightricks, Jasper, Glean, Stability AI with recent additions being Anthropic, Cohere, RunwayML, Replit, Inflection, Adept and Character.ai. 

V. Use Cases

Generative AI’s potential applications are vast and varied, with implications across many industries and fields. Here are some broad categories of how generative AI is being used today and the potential applications in the future: 

Content creation: This includes creating art, music, video, product designs and text. For example, generative AI can be used to create unique and original pieces of art or to generate background music for a video. It can also be used to generate new patterns for clothing or furniture designs. Avatar design is an upcoming and widely popular use case. 

Gaming: Generative AI can be used in the gaming industry to create new game content, such as characters, levels, and quests. It can also be used to enhance the player’s experience by creating dynamic and unpredictable game elements. 

Simulation and modeling: Generative AI can be used to simulate real-world scenarios, such as predicting weather patterns or simulating traffic flow. It can also be used to generate models of complex systems, such as financial markets or the human brain. 

Marketing and advertising: Generative AI can be used to personalize products and services for individual users. For example, it can be used to generate personalized news articles or to create customized shopping experiences. 

Education and research: Generative AI can be used in educational and research settings to create synthetic data and generate hypotheses. It can also be used to simulate complex systems to better understand how they work. 

Healthcare: Generative AI can be used in healthcare to create personalized treatment plans and to generate synthetic data for medical research. Biotechnology companies can use generative AI to model cellular structures, identify new targets, and optimize drug design.  

In conclusion, generative AI is a promising technology, and with its ability to generate new content or data based on input parameters or constraints, can lead to more efficient, innovative, and sustainable solutions to complex problems. As technology continues to develop and evolve, DVC expects to see even more exciting applications of generative AI in the future. 

DVC believes it is too early to fully understand the true impact of broad adoption of Generative AI though we strongly believe it will be transformational. It is highly likely that incumbents with distribution, large R&D budgets and better training data will benefit the most and further enhance their moat. It is possible new use cases from innovative companies allow them to create a beachfront and challenge incumbents. 

Generative AI, LLMs and Conversational AI

There is some confusion on the similarities and differences between Generative AI, LLMs and Conversational AI.  

LLMs are trained on massive amounts of text data, and they can be used to generate text, translate languages, and answer questions. LLMs are essentially models – some form of neural networks that can power conversational AI. Conversational AI is a specific use-case of AI designed to simulate human conversation. It can be used to create chatbots that can answer questions, provide customer service, and even generate creative content. 

LLMs and conversational AI overlap in the sense that they both use language to interact with humans. However, they differ in their goals. LLMs primarily provide text as output, while conversational AI is focused on simulating human conversation. 

LLMs are very good at generating text that is grammatically correct and factually accurate. However, they can sometimes generate text that is nonsensical or even offensive. Conversational AI, on the other hand, is very good at directly (often verbally) continuing human conversation. However, it can sometimes be difficult to understand what it is saying, and it can sometimes give inaccurate or misleading information. Popular conversational AI tools like Google Assistant, Siri and Amazon Alexa have a lot of guardrails in place to ensure that misleading information is not propagated. 

Here are some examples of how LLMs and conversational AI are being used today:

  • LLMs are being used to generate text for articles, books, and even code. For example, OpenAI’s GPT-3 language model has been used to generate articles for The Guardian and The New York Times

  • Conversational AI is being used to create chatbots that can answer questions, provide customer service, and even generate creative content. For example, the chatbot Replika can hold conversations with humans, and the chatbot Bard can generate creative text formats.

In the future, it is likely that LLMs and conversational AI will continue to develop and overlap. It is possible that one technology will eventually supplant the other, but it is also possible that they will coexist and complement each other. Only time will tell how these two technologies will evolve and how they will be used to shape the future.

Generative AI In the VC world

Investments in the category of Generative AI have exploded while there has been a broader slowdown.

Source: Pitchbook and Fortune

Source: CB Insights

VI. Framework

DVC has created the following framework for Generative AI to help understand the various companies in the space and to better analyze opportunities in the space. 

Green – Open-source models/model-providers 

Note: The startup names under each category are by no means exhaustive. They are only used to illustrate examples. 

Glossary of terms

API 

APIs (Application Programming Interfaces) are software interfaces that allow communication between different software applications. They enable developers to use pre-built functions and services from other applications to enhance their own software, without having to develop everything from scratch. In the case of Generative AI, APIs allow developers to integrate pre-trained machine learning models into their own applications, without having to develop these models themselves.

Foundation Models 

Foundation models are very large pre-trained language models that can be used as a starting point for building more specialized AI models, including generative models. These models have been trained on massive amounts of data and have learned to generate text that is often very realistic and coherent. By using a foundation model as a starting point, developers can save time and resources when building their own generative models. The term came into existence when authors at the Stanford Institute for Human-Centered AI (HAI) came up with it in a 2021 paper

Vector Database 

Vector databases are a type of database that stores embeddings of data in the form of vectors. They’re particularly useful when data is non-relational – as is the case with most generative AI applications. Vectors are a mathematical representation of data that can be used to represent text, images, audio and other types of multimodal data. Vector databases can be used to store and retrieve data more efficiently than traditional databases, which can help to improve the performance of generative AI models. They are particularly useful in generative AI because they can help models understand relationships between different data points and generate more coherent and realistic output. 

Monitoring 

Model monitoring refers to the process of tracking and analyzing the performance of generative AI models over time. This includes monitoring factors such as accuracy, efficiency, and stability. By monitoring models regularly, developers can identify and address issues quickly, and ensure that their models continue to perform well over time.

Autonomous agents 

Autonomous agents are AI systems that can generate a systematic sequence of tasks until they satisfy a predetermined goal. These agents run on a loop to generate self-directed instructions and actions at each iteration. Their capacity to execute virtually any task independently, facilitated by chain-of-thought techniques and integration with external tools and APIs, distinguishes them remarkably. 

VII. Deep dive into the AI Ops and Tooling layer

As an Investment Strategy, DVC will focus on the AI Ops and Dev Tools layer. DVC believes this “Picks and Shovels” strategy best suits its strengths and track record as an infrastructure software investor.   

There are a multitude of tools from both startups as well as well-established large providers. We intend to use our expertise to identify specific opportunities within this category. We find it challenging to predict which application-level products will succeed in the market. It is more likely that incumbents who already have market share will incorporate Generative AI features into their products, further bolstering their market position.  

Foundation model (FM) providers require significant capital investments and are currently not feasible for venture capital investment by DVC. Moreover, the availability of numerous open source models and a recent Google internal memo highlighting that proprietary companies have no moat in Generative AI makes it difficult to identify which models are worth investing in.  

There are strong reasons why DVC focuses on the AI Ops and Tooling layer. 

  • AI Ops and tooling are essential for the successful deployment and operation of AI systems. AI systems are becoming increasingly complex, and it is difficult to manage them manually. AI Ops and tooling provide a way to automate and manage AI systems, which can save businesses time and money.
  • As more businesses adopt AI systems, there will be a growing demand for AI Ops and tooling solutions.
  • From the massive growth of vector databases (Pinecone raises series B) to frameworks to create LLMs like LangChain, AI Ops and tooling companies will be well-positioned to benefit from the growth of the AI market irrespective of which applications are successful.
Here’s a market map: AI Ops and Dev Tools. Further startups can be found on the Dealroom website.

To create a production-ready app, here are the tools, apps, and platforms one would generally use: 

First, 

  • Choose a programming language and framework: Python is the most popular language for AI. Other programming languages like Julia and R may be used, and a mixture of languages may be used when mobile applications are involved, such as Java, and C++ especially for models in production.
  • Cloud providers: One would normally use a cloud provider like Google Cloud Platform (GCP), Amazon Web Services (AWS) or Azure to host the AI model and app. Cloud providers offer a variety of features that can make it easier to deploy and manage an AI app, including scalability, reliability, and security.
  • Vector databases: Vector databases are designed to store and query large amounts of vector data, which is ideal for AI models. One could use a vector database like Pinecone to store AI model’s parameters. However, it is important to note that vector databases are not always the best choice for storing and querying structured data. For example, if the data is not high-dimensional, or if the queries are not complex, then a traditional relational database may be a better choice.

Then, 

  • Data collection: One can use a variety of data collection and preparation tools, such as Snorkel AI, Heartex or other image databases. Synthetic data providers (such as Rockfish.AI) could also be used to obtain realistic images for those cases where real-life data is hard to get, e.g. Diabetes radiographs. 
  • Training: One can then use a variety of AI training tools, such as custom-created models or call model APIs such OpenAI’s GPT-3, and Google AI’s Imagen, train the model and store trained vectors in the vector database.
  • Testing and validation: One can use a variety of AI testing and validation tools, such as Google’s AI Test Kitchen and OpenAI’s CLIP, to test and validate AI models to ensure that it is performing as expected.
  • Monitoring: One should use a variety of AI monitoring tools, such as Google’s AI Platform Monitoring and OpenAI’s Weights & Biases. These tools are used to monitor AI models in production to ensure that it is performing as expected and to identify any potential problems.
  • Deployment: One can use a variety of AI deployment tools, such as Google’s AI Platform and OpenAI’s API. These tools are used to deploy AI models to production environments. If not hosted on the cloud, people can use dedicated servers to handle increased traffic and for data security. Dedicated servers offer more control and flexibility than cloud-based solutions, but they can be more expensive. Individual servers are good options for AI apps that require a lot of resources or that need to be highly available.

In addition to these steps, one would need to consider the tradeoffs between the following when choosing a particular tool at a startup:

  • Cost: The cost of creating and deploying an AI app can vary depending on the tools and platforms used. It is important to factor in the cost of data collection, training, testing, validation, monitoring, and deployment when planning an AI project. 

  • Time: The time it takes to create and deploy an AI app can also vary depending on the complexity of the app and the tools and platforms used. It is important to factor in the time required for each step when planning an AI project. 

  • Skills: The skills required to create and deploy an AI app can vary depending on the tools and platforms used. It is important to have the necessary skills or switch to no-code AI platforms such as DataRobot, Clarifai AI Platform, etc. before starting an AI project.

End-to-end generative AI app creation platforms are changing the way current AI tooling and ops layer is thought of. They provide a cloud-based platform that allows developers to deploy their AI applications, making it possible for developers to scale their AI applications up or down as needed without having to worry about managing the underlying infrastructure. These platforms typically provide a graphical user interface (GUI) that allows developers to drag and drop components to create an AI application without having to write any code. Some of the leading end-to-end generative AI app creation platforms include: Google’s Gen App Builder, Azure OpenAI Service and Amazon BedRockLangChain provides end-to-end chains’ integration to make working with various programming languages, platforms, and data sources easier, thus helping create applications that are “data-aware” and “agentic” [Example

VIII. Responsible AI

Generative AI has the potential to transform many industries, but it also poses significant risks. One of the main concerns with Generative AI is the potential for bias and ethical issues. Without proper Governance, Risk, and Compliance (GRC) measures in place, companies using Generative AI could face legal and reputational damage. Along with the AI Ops and Development tools’ layer, DVC will look to prioritize investments in GRC within the Generative AI space.

Responsible AI is a set of principles and practices that guide the development and use of artificial intelligence (AI) in a way that is ethical, fair, and beneficial to society. There are many different aspects to Responsible AI, but some of the key principles from IBM and this blog include:

  • Fairness: AI systems should not discriminate against any individual or group of people. 

  • Transparency: AI systems should be transparent so that people can understand and explain how they work and arrive at certain decisions. Data, model and decision explainability are some categories that are covered in this principle.  

  • Accountability and Governance: AI systems should be accountable so that people can be held responsible for their actions. This becomes especially important when LLMs enter the realm of IP infringement from proprietary sources’ data. Especially important when agents start becoming more widely adopted.  

  • Privacy: AI systems should respect people’s privacy and incorporate methods to flag sensitive information.  

  • Robustness: Robustness involves detecting and diagnosing issues with LLMs, such as bias, drift, or model degradation, and taking corrective action to ensure that the LLMs are accurate, reliable, and aligned with the core principles of the organization.

Furthermore, the capacity for AI systems to behave unexpectedly is intrinsically woven into their complexity and is not solely triggered by external malicious factors. Similar to a software bug, these unforeseen behaviors exist independently of being exploited as vulnerabilities. The existence of these flaws within the system persists regardless of external influences. This highlights the importance of addressing these issues not only from the perspective of input prevention but also at the core of their engineering. It is crucial to establish robust internal measures that can identify and resolve these inherent irregularities, similar to the process of debugging in software development. Maintaining the integrity of the system requires a two-pronged strategy: preventing troublesome external inputs and continuous internal enhancement. 

Some companies in this category are HolisticCredoFiddlerTrueeraArthur etc. Umbrella organizations such as the Responsible AI Institute are focused on providing tools for organizations and AI practitioners to build, buy, and supply safe and trusted AI systems. 

Caveats to current LLMs and going forward

While it is exciting to witness the significant progress made by large language models (LLMs) and generative AI in general, there are some caveats that DVC considers before fully embracing these technologies. 

One of the most prominent concerns is the issue of bias. LLMs are trained on vast amounts of data, much of which is sourced from the internet. However, this data is often biased towards certain perspectives, cultures, and demographics, which can result in LLMs producing biased and potentially harmful outputs.  

Another concern is the potential for LLMs to be used maliciously. Some malicious attacks that can be performed on LLMs include prompt injection, where an adversary can prompt the LLM to produce malicious content or override the original instructions and the employed filtering schemes. Adversarial prompting is another type of attack that involves using clever prompts to hijack the model output. Membership inference attacks, training data leakage, and malicious queries are other types of attacks that can be performed on LLMs. These attacks can result in the exposure of sensitive information, modification of the database, and unauthorized requests. 

Additionally, there is the question of transparency and hallucination. The complexity of LLMs and generative AI means that it can be difficult to understand how these models arrive at their outputs. Hallucination refers to the phenomenon where a language model generates output that is not grounded in reality or lacks coherence with the input provided. In other words, the model generates text that is not factually accurate or contextually relevant and does not correspond to any real-world experience. 

Hallucinations can occur in LLMs due to a variety of reasons such as insufficient training data, biased training data, and model architecture limitations. These hallucinations can range from minor errors in syntax and grammar to completely nonsensical and absurd output.  

Policy regulators in various countries are revisiting their stance over privacy concerns of LLMs. Italy is the first country to have banned ChatGPT earlier in April and lifted the ban towards the end of the month. Despite these concerns, we believe that there is a bright future for LLMs and generative AI. Stanford’s recent AI Index Report 2023 highlights that policymakers’ interest in AI is on the rise and that AI is both helping and harming the environment. Microsoft’s recent publication posits that developments from GPT-4 could be signs of Artificial General Intelligence (AGI), a paradigm that refers to the ability of machines to perform any intellectual task that a human can, something that researchers thought was decades away from being achieved. AGI includes tasks that are currently considered to be too complex for machines, such as creativity, empathy, and common sense. 

IX. Other Future Opportunities?

LLM Management layer

Having consulted industry experts (such as Chief AI Officers) and different generative AI startup co-founders, DVC gathered that there is a vast services’ opportunity in the LLM Management layer – a layer that sits on top of the foundation model layer – to ensure safety, privacy and security of customer data. This layer would be responsible for providing wrapper or containerized services (and warnings) to sensitive information when end-users deal with APIs or while interacting with applications. Currently, third party APIs from core companies are extensively used by other companies and sufficient protection is not in place (Samsung workers unwittingly provided sensitive information to ChatGPT) as these applications constantly absorb and iterate over user queries.  

The LLM privacy management layer is responsible for ensuring that sensitive data is not leaked or misused by the LLM. This layer can include privacy-preserving techniques such as differential privacy, federated learning, and secure multi-party computation to protect the privacy of user data. The privacy management layer can also include mechanisms for data segregation and data domains to ensure that data is handled appropriately and in compliance with relevant regulations and policies. 

This layer could also be used for moderating biases and fact-checking against a host of reliable sources and flag any warnings. A recent paper by the Allen Institute showed the toxicity of ChatGPT’s word generations when it was assigned different user personas. There could be a vast potential here for identifying these inherent biases and prompt engineering and fine-training current foundation models with synthetic data from startups like Rockfish DataParallel Domain and Synthesis AI

Addressing security risks from the new Generative AI app development is becoming an important area of focus. GenAI Guardrails is an example of a LLMOps’ tool that provides technical integrations to enable governance teams to implement and configure I/O filters, privacy – and security-preserving measures, and other guardrails to prevent risks associated with generative AI, such as inaccurate or nonfactual outputs, IP infringement, and data leakage in model outputs. CalypsoAI‘s LLM Management tool – Moderator – is a model-agnostic approach to safeguarding data while deploying LLMs at scale and across the enterprise. HiddenLayer provides a platform to monitor, detect, and defend machine learning algorithms from adversarial attacks such as inference and poisoning attacks. As enterprise adoption of Generative AI becomes more widespread, a company’s algorithms and classified training sets are a source of its unique competitive advantage. Hence, productizing solutions for the next security frontier of AI would become increasingly important.

LLM Operations

Further opportunities would be in LLM Operations, tackling unique LLM challenges such as crafting effective prompts, sanitizing model outputs, and mitigating risks like biases, lack of transparency, and at the intersection of AI and Cybersecurity addressing prompt injection vulnerabilities, adversarial attacks and API-driven security vulnerabilities. 

Current foundation models are open domain models. They are provided by industry and academia after having been trained on a vast corpus of domain-agnostic text, and need to be finetuned for specific use-cases such as law, medicine, etc. As we move forwards, we will see a rise in fine-tuned domain-specific or task-specific models with company-specific models trained on proprietary data. A recent example of a domain-specific model is Google’s Med-PaLM 2.  

Additionally, multi-shot learning and zero-shot learning can be used to improve LLMs’ performance on tasks where there is limited labeled data or where the model needs to generalize to unseen classes. In multi-shot learning, the model is trained on a small number of examples from each class, typically more than one, but still significantly less than what is required for traditional supervised learning. In zero-shot learning, the model is trained on a set of seen classes and then tested on a set of unseen classes.  

Another captivating opportunity is in the domain of autonomous agents. As the complexity of the tasks, they’re tasked with escalates, their performance markedly deteriorates. Compounding the challenge is the lack of clear methodologies to decipher the reasons for their failure and to enhance their performance, aside from strengthening the underlying model. The lack of explainability in agents, particularly when interacting with critical APIs, poses serious risks. Their opaque decision-making processes can lead to unpredictable and potentially harmful outcomes in sensitive contexts, such as healthcare or finance.

From the cloud to the edge

Most AI training is presently conducted in the cloud due to the seemingly boundless availability of computing resources and the associated flexibility. Over time, with immense personalization and vast amounts of data to compute, DVC believes applications will run real-time on the edge post training in the cloud. Ambarella and Syntiant are some companies that have created SoCs designed to run generative AI models on the edge. Apple is rumored to be working on creating user specific trained LLMs that work on an iPhone while leveraging the cloud as needed.

Environmental concerns

University of Colorado Riverside and the University of Texas Arlington researchers have shared a paper titled “Making AI Less Thirsty” that looks into the environmental impact of AI training, which not only needs copious of electricity but also tons of water to cool the data centers. When looking into how much water is needed to cool the data processing centers employed, researchers found that training GPT-3 alone consumed a whopping 185,000 gallons of water — which is, per their calculations, equivalent to the amount of water needed to cool a nuclear reactor. 

The next state for large companies would be on Green Generative AI. As the world becomes more environmentally conscious, green AI is likely to become an essential focus for generative AI research and development. The future of Generative AI will need to prioritize sustainability to meet the needs of society without further damaging the planet.[Source

We would like to thank Vijay K NarayananAriel SchönKrishnaram KenthapadiDeepak Sekar and MuckAI Girish for their valuable inputs.

References

Leave a Comment

Your email address will not be published. Required fields are marked *

DVC is a team of passionate entrepreneurs on a mission to transform the startup journey.

CONNECT WITH US

HELPING STARTUPS TAKE THE LEAP OF GROWTH

Where is your company located