Tips For Boosting Product Features With Generative AI

Tips For Boosting Product Features With Generative AI

Posted in

AI is playing a bigger role in software development, from AI-generated code through to AI tools that consume and execute APIs independently. And lately, generative AI has become increasingly embedded into consumer products as well. As former Gartner analyst Paul Dumas told us, “Probably next year, or certainly the year after, at least 70% of any software product you touch is going to have an AI component to itself.”

If that’s the case, and it’s looking more and more like it will, API developers will soon have to prepare for additional waves of security measures, compliance, and governance. The sooner we can start doing that, the better… even if best practices are still evolving rapidly.

At our 2024 Austin API Summit, Ruben Sitbon, Lead Solutions Architect at Sipios, a FinTech agency, joined us to talk about boosting product features using generative AI. He also covered the API developer’s role in AI projects and how to bundle security and compliance into the equation.

Below, we’ll look more closely at a framework suggested by Sitbon. We’ll highlight some actionable tips and consider what the future of this space might look like.

Watch Ruben Sitbon present at the Austin API Summit:

Don’t Reinvent The Wheel: Reuse, Integrate, Incorporate

In 2023, the discussion of gen AI centered around chatbots like Jasper, Prompt Genie, and Mobile-GPT, says Sitbon. “We’ve seen only a few game-changing products: ChatGPT, Perplexity.ai for developers, Claude, and so on,” he says.

We’ve also witnessed some of the first failures of gen AI in production. For example, Air Canada was forced to match a discount promised by a chatbot, despite the company arguing (unsuccessfully) that their chatbot was a separate legal entity. This example speaks to the fact that many AI tools operate in silos, independent of connections to optimal information. As such, companies need to be extremely careful when boosting product features with AI.

Where possible, Sitbon suggests, we should leverage existing products or tried and tested services instead of spending time and money building new ones. When it comes to leveraging existing products, the potential for using APIs is obvious.

Undertaking The Proper AI Strategy

In his talk, Sitbon describes clients coming to him and saying things like, “We want to put GenAI everywhere in our product!” Lots of startups are already building products with Gen AI at their core, and established players are also rushing to expand what their products can do using AI.

But as we’ve seen above, and elsewhere, this isn’t something that should be undertaken lightly. Sitbon suggests the following four-step process for assessing suitability and implementing AI:

1. Define the dimensions needed to solve the problem(s)
2. Segment with your team’s skill set
3. Position where tools fit on that chart
4. Chart the relevant area for each tool

The greater the output customization required, the more technical expertise in AI, LLMs, and prompt engineering is required. Similarly, a greater range of required knowledge of certain fields, as well as access to private documents, will also impact the skill sets and privileges needed.

So, Sitbon asks, who can bring together the skill sets of Product teams, AI engineers, and Ops?

He suggests that, in many cases, it’s the API software engineer. API developers can facilitate product design by supplying the right tools, helping technical teams assess the security of product ideas, and developing scalable gen AI implementation with frameworks like LangChain. More on the latter below…

Monitor, Observe, And Test To Reduce Threats

Although boosting products using AI might seem intimidating, much of it comes down to leveraging existing best practices. “We already know how to manage SQL injection and DDoS attacks,” Sitbon says. “And it’s pretty much the same for large language models — there’s prompt injection, as well as EDoS (economic denial of sustainability) attacks.”

To mitigate security concerns, we can increase system observability using tools like Fiddler, a specialist tool designed specifically for enterprise AI observability, he says. We can also build custom middleware, monitor both the input and output, and so on.

Also, Sitbon asserts that we should be testing a lot and limiting our attack surface by using API-first architecture. For instance, run manual tests. Sitbon suggests enlisting your ten smartest friends to try to break your system! Regular integration testing can look out for prompts that don’t work correctly or could be exploited.

Of course, additional AI-specific threats are bound to emerge in the future. Unfortunately, as Sitbon remarks, there’s no “secret sauce” when it comes to preventing attacks on APIs or AI beyond taking all of the necessary precautions and remaining as vigilant as possible.

Make Sure You Can Scale…

What implementing gen AI actually looks like will vary considerably based on things like the complexity of the product, input formats, and desired outputs. Consider, for example, the difference between a zero-shot prompt, a prompt with no additional context, and few-shot prompts that provide additional contextual examples.

Sitbon goes on to discuss some of the different architectural levels of deploying gen AI that you might encounter, depending on the complexity of the implementation, and illustrates how a framework can help address each of these different levels.

He uses the example of LangChain, which is an open-source orchestration framework that offers a generic interface for almost any LLM and is available in both Python and JavaScript-based libraries. As you can see below, it has tools for projects of all complexities:

For improving observability, for example, Sitbon highlights that there are open source products like LangSmith available. This SaaS from LangChain allows you to replay prompts, debug new chains or agents, run regression tests, and so on.

Don’t Get Left Behind…Or Rush In

It’s worth remembering that, relatively speaking, we’re still in the early days of using AI to boost product features. With that said, startups and innovative SaaS players are already launching products with AI at their core, or integrating its functionality using extensions or plugins.

As time passes, bigger (and more slow-moving) players will be looking to catch up. As we know, however, large corporate entities are typically risk-averse and prioritize security and compliance above all else. Moving too fast, as we’ve seen above, can cause major headaches for companies.

But the adoption of AI is unlikely to be slowing down any time soon and, while we’ll undoubtedly see more open-source projects and de facto standardization, there’s still a ton of untapped potential for companies looking to capitalize on the AI boom.

In other words, as time goes by, we expect to see more frameworks and tools like LangChain emerge. And potentially, just like in any boom period, there could be a lot of them…

If you’re looking to embrace gen AI, no matter how tempting it might be to jump in headfirst, a measured approach is always the right one. Be sure to watch this space for more emerging best practices and case studies on how AI is changing the consumption and development of APIs.