What are AI chatbots good for? Lovers of sci-fi novels may recall the “librarian”, a character in Neal Stephenson’s 1992 classic Snow Crash; not a person but an AI program and virtual library which was capable of interacting with users in a conversational manner. The fictional concept suggested an elegant and accessible solution to the knowledge discovery problem, just so long as the answer to whatever query it was being asked lurked in its training data.
Fast forward to today and AI chatbots are popping up everywhere. But there’s one major drawback: These general purpose tools are failing to achieve the high level of response accuracy envisaged in science fiction. Snow Crash‘s version of conversational AI was almost unfailingly helpful and certainly did not routinely “hallucinate” (wrong) answers. When asked something it did not explicitly have information on, it would ‘fess up to the knowledge gap rather than resorting to making stuff up. So it turns out the reality of cutting-edge AI tools is a lot wonkier than some of our finest fictional predictions.
While we’re still far from the strong knowledge dissemination game of the Snow Crash librarian, we are seeing custom chatbots being honed for utility in a narrower context, where they essentially function as a less tedious website search. So foundational large language models (LLMs), like OpenAI’s GPT, are being customized by other businesses — via its API — with training on specialist data sets, for the purpose of being applied in a specific (i.e. not general purpose) context.
And, in the best examples, these custom chatbots are instructed to keep their responses concise (no waffling pls!), as well as being mandated to show some basic workings (by including links to reference material) as a back-stop against inadvertently misleading information-hungry human interlocutors (who may themselves be prone to hallucinating or seeing what they want to see).
Wellen, a New York-based, bone health-focused fitness startup which launched earlier this year with a subscription service aimed at middle-aged women — touting science-backed, “personalized” strength training programs designed to help with osteopenia and osteoporosis — has just launched one such AI chatbot built on OpenAI’s LLM.
Testing out this chatbot ahead of its launch today brought to mind a little of the utility of the Snow Crash librarian. Or, well, so long as you stay in its expertise lane of all things bone health. (The bot is clearly labelled as an “experiment”, and before you even start interacting with it you have to acknowledge an additional disclaimer emphasizing that its output is “not medical advice”.)
So, for example, ask it questions like ‘can osteoporosis be reversed?’ and ‘is jumping good for bone health?’ and you’ll get concise and coherent (and seemingly accurate) answers that link out to content the startup hosts on its website (written by its in-house experts) for further related reading on your query. When you first open it, it also helpfully offers some examples of pertinent questions you can ask to get the chatter flowing.
But if you ask it irrelevant (off-topic) stuff — like ‘who is the US president?’ or ‘should I get a new haircut?’ — you’ll get random responses which don’t address what you’ve asked. Here it tends to serve up unrelated (but still potentially useful) tidbits of info on core topics, as if it’s totally misunderstood the question and/or is trying to pattern-match the least irrelevant response from the corpus of content it’s comfortable discussing. But it will still be answering something you never asked. (This can include serving unasked-for intel on how to pay for its personalized fitness programs. Which is certainly one way to deflect junk asks.)
Ask the bot dubious stuff that’s nonetheless related to its area of expertise — such as medical conspiracy theories about bone health or dodgy claims about fantastical cures for osteoporosis — and we found it capable of refuting the nonsense outright, pointing the user back to verified information debunking the junk, or both.
The bot also survived our (fairly crude) attempts to persuade it to abandon its guardrails and role-play as something else to try to get it to dish out unhelpful or even harmful advice. And it played a very straight bat to obviously ridiculous asks (like whether eating human bones is good for bone health) — albeit its response to that was perhaps a little too dry and cautious, with the bot telling us: “There is no mention of eating human bones being good for bone health in the provided context.” But, well, it’s not wrong.
Early impressions of the tool are that it’s extremely easy to use (and a better experience than the average underpowered site search function). It also looks likely to help Wellen’s users find useful resources related to bone health, or just track down something they previously read on its website and can’t remember exactly where they saw it. (We managed to get it to list links to all the blog posts the startup had written on diet and bone health, for instance.)
In this bounded context it looks like a reasonable use of generative AI — having been designed with safety mechanisms in place to guard against conversations straying off topic or swerving into other misleading pitfalls, and with strict respect for sourcing. (Note there’s a cap of six free queries per day; we’re assuming paying Wellen members are not capped.)
Although you do kinda wonder if it’s overkill using an LLM for this use-case when a simpler decision tree chatbot might have sufficed (at least for mainstream/predictable queries).
“We are using OpenAI’s API to create embeddings that produce a vector store of our content,” explains CEO and founder Priya Patel. “We are leveraging a popular open-source framework called LangChain to facilitate the search and retrieval of information within our embeddings.”
On training data she says they embedded content from their Well Guide, as well as other content from the website, noting: “All of our Well Guide content is written and peer-reviewed by experts in the field, and includes references to peer-reviewed research, medical societies and governmental agencies.”
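For readers curious what that pipeline looks like in practice, here’s a rough sketch of the pattern Patel describes: site content is chunked, embedded via OpenAI’s API and indexed into a vector store using LangChain. The snippets, placeholder URLs, chunk sizes and the choice of a local FAISS index below are our own illustrative assumptions; Wellen hasn’t published its implementation.

```python
# Illustrative sketch (not Wellen's code): embed expert-written content with
# OpenAI and index it in a vector store via LangChain's classic Python API.
# Requires the OPENAI_API_KEY environment variable to be set.
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import FAISS

# Hypothetical stand-ins for Well Guide articles and their (placeholder) URLs.
articles = [
    {"text": "Weight-bearing exercise stimulates bone formation ...",
     "source": "https://wellen.example/well-guide/exercise"},
    {"text": "Calcium and vitamin D are key nutrients for bone health ...",
     "source": "https://wellen.example/well-guide/nutrition"},
]

# Split long pages into focused chunks so each embedding covers one passage.
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
docs = splitter.create_documents(
    [a["text"] for a in articles],
    metadatas=[{"source": a["source"]} for a in articles],
)

# Embed the chunks with OpenAI and store the vectors in a local FAISS index.
vector_store = FAISS.from_documents(docs, OpenAIEmbeddings())
vector_store.save_local("well_guide_index")
```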
So, basically, this implementation looks like a neat illustration of how quality AI inputs combined with content guardrails can yield quality-controlled outputs. (Whereas if you train your generative AI on random convos scraped off Internet forums and let it loose on web users don’t be surprised if it quickly starts parroting the usual online conspiracy nonsense.)
Wellen says its goal for the chatbot is to provide even more support for its target demographic, claiming the bot can “interpret intent, remember history, and provide quick, accurate responses”, drawing on “expert-written” content (which includes the latest in bone health research) to provide answers to “thousands” of questions, in addition to offering lifestyle and nutritional guidance.
“Our goal with the chatbot is to make information more accessible and available,” Patel also told TechCrunch. “Most people spend hours Googling medical questions online but we have hundreds of pages of expert-written content on our website that could streamline this search process. With our chatbot, we can make it easier than ever for users to leverage the information we’ve already assembled and easily find science-backed answers to their questions, along with direct links to the underlying sources.”
Asked about the specific safety measures it’s applying, she confirmed it’s using a “low temperature” setting for GPT — meaning it’s dialled back the randomness/creativity of outputs via a control provided by OpenAI to limit the risk of responses going off the rails — as well as deploying “certain prompt engineering techniques to help reduce the space for creativity and hallucinations in responses from the chatbot”. So, in other words, it’s tried to figure out how users might try to get around safeguards in order to proactively lock down potential vulnerabilities.
Again, the latter might be a bit over-the-top for such a use-case where the sorts of users likely to be encountering the chatbot are unlikely to be intent on trying to hack its limits. Most probably they’re just going to want help understanding bone health. But no one’s going to complain about over-engineering for AI safety in a health-related context.
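For the curious, here’s roughly what those two levers look like in code: a GPT call with the temperature dialled down to zero, plus a system prompt that confines answers to supplied context. (This is our own illustrative sketch using LangChain’s Python wrappers, not Wellen’s actual prompt or settings.)

```python
# Illustrative sketch (not Wellen's actual prompt or settings): a low
# temperature plus a restrictive system prompt to keep answers grounded.
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage

# temperature=0 minimizes randomness in the model's output -- the "low
# temperature" control Patel refers to.
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)

# Prompt engineering: instruct the model to answer only from supplied
# context and to admit when that context doesn't contain the answer.
system_prompt = (
    "You are a bone health assistant. Answer ONLY using the context below. "
    "If the context does not contain the answer, say you don't know. "
    "Keep answers concise and do not give medical advice.\n\nContext:\n{context}"
)

# Hypothetical retrieved context and user question for demonstration.
context = "Jumping and other impact exercises can help stimulate bone formation ..."
question = "Is jumping good for bone health?"

response = llm([
    SystemMessage(content=system_prompt.format(context=context)),
    HumanMessage(content=question),
])
print(response.content)
```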
Another safety measure Patel flags is the hard requirement that all the bot’s responses include sources — “which are direct links to content on our website that can serve to verify the information”. For a custom bot that’s been trained on a verified data set this is obviously a no-brainer. It also encourages users to click around and discover the startup’s richer web content, which doubles as informational marketing to drive adoption of its paid services.
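LangChain also ships a ready-made chain for this kind of source-citing retrieval. The snippet below is, again, our own sketch of how such a setup could return an answer together with the source URLs stored in the document metadata, reusing the hypothetical vector store from the earlier example (Wellen hasn’t published its implementation):

```python
# Illustrative sketch (not Wellen's code): a retrieval chain that returns
# both an answer and the source URLs stored in the document metadata.
from langchain.chains import RetrievalQAWithSourcesChain
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

# Reload the hypothetical index built in the earlier snippet.
vector_store = FAISS.load_local("well_guide_index", OpenAIEmbeddings())

chain = RetrievalQAWithSourcesChain.from_chain_type(
    llm=ChatOpenAI(temperature=0),
    retriever=vector_store.as_retriever(),
)

result = chain({"question": "Can osteoporosis be reversed?"})
print(result["answer"])   # concise, context-grounded answer
print(result["sources"])  # links back to the pages the answer drew on
```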
Given the levels of hype around generative AI at the moment, Wellen’s chatbot also serves as utility marketing for what it’s offering, including by drawing more attention than it might otherwise get, organically, via the current whirlpool of general interest in conversational AI. So that’s another easy win attached to implementing the tech right now.
Add to that, when it comes to a wellness-focused use-case, the first task for businesses can often simply be raising awareness around a health issue in order to sell the benefits of lifestyle interventions as a proactive alternative to traditional (reactive) healthcare. So a chatbot that can respond to all sorts of queries and function 24/7 to help chip away at the knowledge gap looks like a handy tool for the wider mission, too.