BloombergGPT hints at new possibilities of AI in work
The ‘AI’ tools that have entered the mainstream so far are largely one-size-fits-all, but Bloomberg is among the first companies to experiment with deploying a large language model on its own corpus of knowledge, with a single portal through which to explore it.
Why it matters
Bloomberg is apparently an early mover in the domain-specific deployment of large language models (LLMs), the technology behind OpenAI’s ChatGPT and Google’s Bard. The big difference is that its model is trained not on the web in its totality but on a carefully curated set of its own data, plus a further curated trove of web content.
What’s interesting is how it pulls the developing story of generative AI away from a handful of colossal tech companies by going deep on a specific topic for a specific audience. Brands and agencies tend to have knowledge sets of their own and might find LLMs useful for organising them. In doing so, Bloomberg has got ahead of a much wider commercial opportunity that the major tech providers are only now starting to talk about.
How this plays out is less clear, however: it could feed into an internal tool that speeds up human writers and analysts, or even become part of the Bloomberg Terminal user experience.
What is it?
BloombergGPT, as it is called in the research paper, is a “large language model (LLM) [that] has been specifically trained on a wide range of financial data to support a diverse set of natural language processing (NLP) tasks within the financial industry”.
Such tasks include: “sentiment analysis, named entity recognition, news classification, and question answering, among others”. According to the results reported in the paper, the finance-specific large language model matched the general performance of bigger general-purpose models like GPT-3 on tasks such as reading comprehension and linguistic scenarios, while excelling at finance-specific tasks.
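To make one of those tasks concrete, here is a minimal sketch of few-shot sentiment classification on financial headlines. It is illustrative only: BloombergGPT is not publicly available, and the complete() callable is a stand-in for whatever text-completion endpoint a domain model exposes, not Bloomberg’s actual API.

    # A minimal, hypothetical sketch of few-shot sentiment classification.
    # `complete` is passed in by the caller and maps a prompt string to
    # the model's text continuation.

    FEW_SHOT = """\
    Headline: Acme Corp beats quarterly earnings estimates, raises guidance.
    Sentiment: positive

    Headline: Regulators open probe into Acme Corp accounting practices.
    Sentiment: negative

    Headline: Acme Corp to hold annual shareholder meeting on 12 May.
    Sentiment: neutral

    Headline: {headline}
    Sentiment:"""

    def classify_sentiment(headline: str, complete) -> str:
        """Ask the model to continue the pattern with a sentiment label."""
        prompt = FEW_SHOT.format(headline=headline)
        # Take the first word of the continuation, e.g. "positive".
        return complete(prompt).strip().split()[0]

This few-shot pattern (task examples in the prompt, which the model continues) is a standard way to apply such models to downstream tasks without retraining.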
Ingredients
Clean data is at the core of the project: the company’s archive of news and financial documents provided a large, clean dataset, known as “FinPile”, on which to train the Bloomberg-specific model.
Fifty-two percent of the training data comes from Bloomberg’s own archives, with the rest drawn from curated English-language news, Wikipedia, YouTube captions and other public sources, collectively known as “The Pile”.
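For illustration, a training-data mix like this is often implemented as weighted sampling from the component corpora. The sketch below assumes the 52/48 split described above; the argument names and the function itself are hypothetical, not Bloomberg’s pipeline.

    import random

    def mixed_stream(domain_docs, public_docs, domain_fraction=0.52, seed=0):
        """Yield training documents, drawing from the domain corpus with
        probability `domain_fraction` and from public data otherwise.
        Both arguments are iterators over documents."""
        rng = random.Random(seed)
        while True:
            source = domain_docs if rng.random() < domain_fraction else public_docs
            yield next(source)

In practice, large training runs usually weight sources at the token level rather than the document level, but the principle is the same.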
What it tells us
The story suggests you don’t need to plough billions of dollars and hire huge numbers of people to train a language model on a bespoke dataset (though it helps if that data is clean in the first place).
The news has implications for any brand, agency or platform with large databases or archives that could be queried in natural language, rather than relying solely on highly trained individuals to run complex look-ups. In other domains, practitioners are speculating that it could help to find efficiencies across complex systems, such as in cross-media carbon tracking.
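One common pattern for that kind of natural-language access is retrieval-augmented generation: embed the archive, retrieve the passages most similar to a question, and pass them to an LLM as context. The sketch below assumes generic embed() and complete() endpoints and says nothing about how Bloomberg actually implements this.

    import math

    def cosine(a, b):
        """Cosine similarity between two embedding vectors."""
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(y * y for y in b))
        return dot / (norm_a * norm_b)

    def answer_from_archive(question, archive, embed, complete, k=3):
        """Answer a question using the k archive passages most similar
        to it; `embed` and `complete` are hypothetical model endpoints."""
        q_vec = embed(question)
        ranked = sorted(archive, key=lambda doc: cosine(embed(doc), q_vec),
                        reverse=True)
        context = "\n\n".join(ranked[:k])
        prompt = (f"Using only the context below, answer the question.\n\n"
                  f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")
        return complete(prompt)

Grounding the model’s answers in retrieved passages is also one way to keep a general-purpose LLM anchored to a proprietary archive without retraining it.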
It’s also a slightly more sophisticated take on generative AI, moving it away from the simple job-stealer narrative towards a tool that can take some of the harder, less productive yards out of experts’ work, freeing them up for fresh analysis and reporting rather than trawling the archive for context.
Sourced from Bloomberg, WSJ, WARC
[Image: Bloomberg’s London office]