Decoded

AI News for Alternative Assets

Curated insights, translated and made actionable for fund managers, investors, and service providers

Welcome to Decoded!

Each month this newsletter will bring you highlights from the whirlwind newsroom of Artificial Intelligence.  What’s different about Aithon’s?  Ours is curated, translated and made actionable for the Alternative Assets community.  Fund managers, investors, service providers – come one, come all. 

A quick word on Aithon. We are a group of former leaders in the Alternatives space who spun out a new venture aimed at solving old-world problems with new-world solutions. We cut our teeth in the middle and back office, so that is where most of our solutions and this newsletter will focus. A front-office chatbot is nice, but we get more excited about turning a two-week close process into a two-day process, or unlocking insight from untapped operations data.

The purpose of this newsletter is simple – to clarify the opaque, to spur thought, and to hopefully inspire you to take the plunge into the world of AI (the water is warm, we promise).   

Featured Articles

Hot summers, even hotter AI developments

August can feel like a sleepy time of year in our space, but markets are booming (maybe?), M&A is back on the menu and funding megarounds show no sign of slowing. The one common denominator? You guessed it, AI.

Here are the highlights of what was worth paying attention to this month:

95% of AI pilots fail, but the 5% that succeed teach valuable lessons
OpenAI’s model chaos reveals why vendor lock-in will burn you 
EU fines are now real (7% of revenue real) while the SEC builds AI surveillance 
Research

Bad news often hides good signals: the bombshell MIT report

The News

MIT’s study of 150 enterprises found that 95% of generative AI pilots fail to achieve measurable P&L impact. But purchasing from specialized vendors succeeds 67% of the time versus 33% for internal builds, with successful implementations eliminating $2-10 million in BPO expenditures.

The Translation

There’s a lot to digest here, and since we are glass-half-full type of people, let’s start by unpacking some of the reasons why companies have been successful with their AI pilots:

They avoided silos by tightly integrating solutions into both legacy technology and business processes
They empowered line managers, not just central AI labs, to both drive adoption AND select the right tools
They often leveraged third-party vendors rather than building in-house. I wouldn’t dare shamelessly plug Aithon here (but drop us an email if you want to chat…)

The report revealed other very relevant insights for the Alts space:

User adoption of personal AI, or “shadow AI”, is both substantially higher and reportedly more effective than enterprise-approved AI.  
Companies invest the bulk of their budgets into sales & marketing rather than back-office functions, where the biggest ROI often sits. 
The So What?

Despite the new prevalence of “AI Labs”, this technology, perhaps more than any other, is best developed with IT and business teams hand-in-hand. Cross-functional engagement is key to both creating an effective AI system and reducing the risks of shadow AI (more on that in the Quick Hits section). If you have any teams sitting siloed in the proverbial basement, get them out of there and into the real world.

Speaking of the real world, this is where your workflows live, and though AI might get all the hype, workflow IS king. Before spending on AI, give deep thought to what your future workflows could look like. This doesn’t mean drawing out A>B>C>D>E but rather thinking about how you can go straight from A>E. Workflows should be redesigned with human-AI collaboration in mind – feature lists can come later.

Our final recommendation is to be intentional (aim small) and pragmatic (miss small).    

Technology

OpenAI’s bumpy release of GPT-5 and the much-anticipated release of its gpt-oss models

The News

OpenAI released GPT-5, achieving 94.6% on advanced mathematics benchmarks while reducing factual errors by 45%. The launch was followed by user backlash when OpenAI retired GPT-4o, forcing a swift reversal. Separately, OpenAI released gpt-oss-120b and gpt-oss-20b as open-weight models.

The Translation

GPT-5 is bigger, faster, stronger, but if we wrote about every new model release then this would be a very long newsletter. The more interesting angle is how GPT-5 achieves this high performance. In practice, GPT-5 is not one single new model but a family of models, with a router deciding which one is called upon for each request (think trading pods organized by asset class).

Nevertheless, this same functionality was one cause of the substantial backlash. OpenAI responded within hours, but it’s something to note, as most AI solutions in the Alts space are built upon foundation models which we, frankly, have no control over.

Speaking of control, let’s talk about “open-weight” models. Weights are essentially the numbers that the algorithm learns, which determine how it makes predictions. Weights = the secret recipe, and having access to them gives you the control to fine-tune models with your own data and deploy them on your own infrastructure. I am speaking to an audience that has a lot of secret recipes, so I’m sure you can appreciate the benefits of having your data stay in house. But there are other benefits – the ability to create custom AI agents (or a network of custom agents) and the ability to cut down on API calls (and therefore costs).
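To make that tangible, here is a minimal sketch of what running an open-weight model in house can look like. We’re assuming the gpt-oss-20b weights are published on Hugging Face under openai/gpt-oss-20b, that the transformers library is installed, and that a large enough GPU is available – treat it as an illustration, not a deployment guide.

```python
# Minimal sketch: running an open-weight model on your own hardware.
# Assumptions: weights hosted on Hugging Face as "openai/gpt-oss-20b",
# transformers installed, and enough GPU memory to load the model.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",  # open weights, downloaded and run locally
    torch_dtype="auto",
    device_map="auto",           # spread the model across available hardware
)

messages = [
    {"role": "user", "content": "Summarize the key drivers of this month's NAV movement."}
]

# The prompt, the data and the model all stay on infrastructure you control.
output = generator(messages, max_new_tokens=200)
print(output[0]["generated_text"])
```

The specific library matters less than the principle: the prompt, the data and the model all sit on hardware you control.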

The So What?

Yes, GPT-5 is great, but given the pace of LLM evolution, I guarantee there will be a ‘next best thing’ or ‘now worse thing’ by September’s newsletter. You need to think carefully about model selection, whether closed or open, remembering that this isn’t simply about one model – it’s about choosing the best model(s) for each task in your workflow.

And just as I can guarantee you that there will be more powerful models, I can also guarantee you that there will be more unexpected model changes, and some for the worse. Building or buying “LLM-agnostic” is critical to avoiding vendor lock-in but can only be done by designing workflows that adapt to model changes without complete rebuilds. Abstraction layers can act as your translator (think FIX protocol) and allow you to swap out underlying models (or brokers in the FIX analogy) without pain. 
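To show what that translator layer might look like in practice, here is a minimal Python sketch. Every class and function name in it is hypothetical – it illustrates the pattern, not a specific product or SDK.

```python
# A minimal, illustrative abstraction layer ("LLM-agnostic" plumbing).
# All names are hypothetical; the adapters below return canned strings where
# real API calls or local inference would go.
from typing import Protocol


class ChatModel(Protocol):
    def complete(self, prompt: str) -> str: ...


class HostedModel:
    """Adapter for a closed, vendor-hosted model (the API call goes here)."""

    def __init__(self, model_name: str = "gpt-5"):
        self.model_name = model_name

    def complete(self, prompt: str) -> str:
        # placeholder: call the vendor's API and return the response text
        return f"[{self.model_name}] {prompt[:40]}..."


class LocalOpenWeightModel:
    """Adapter for an open-weight model running on your own infrastructure."""

    def __init__(self, model_path: str = "./models/gpt-oss-20b"):
        self.model_path = model_path

    def complete(self, prompt: str) -> str:
        # placeholder: run local inference and return the response text
        return f"[local:{self.model_path}] {prompt[:40]}..."


def summarize_rec_breaks(model: ChatModel, breaks: list[str]) -> str:
    """Workflow code depends only on the interface -- the 'FIX layer'."""
    prompt = "Summarize these reconciliation breaks:\n" + "\n".join(breaks)
    return model.complete(prompt)


# Swapping the underlying model is a one-line change, not a rebuild:
print(summarize_rec_breaks(HostedModel(), ["Cash break: $12,500", "Position break: 200 shares"]))
print(summarize_rec_breaks(LocalOpenWeightModel(), ["Cash break: $12,500"]))
```

Because the workflow function only knows about the interface, swapping GPT-5 for next month’s ‘next best thing’ (or for an open-weight model behind your firewall) becomes a configuration change rather than a rebuild.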

Regulation

Two regulators with two very different first steps (or leaps) into AI policy

The News

The EU AI Act’s governance rules became legally binding. High-risk AI systems now require conformity assessments, data governance measures and accuracy requirements.

Meanwhile, the SEC established an AI Task Force to enhance regulatory operations through responsible AI use.

The Translation

This is a fitting turn of events, true to character on both sides of the pond. The EU takes a hardline stance on early enforcement and disclosures, while the US takes a more quasi-capitalist view by leveraging AI itself. My amusement aside, this is something to pay attention to.

In Europe, the picture is clear – any new AI systems launched from August 2nd onwards must comply or face penalties of up to 7% of annual turnover. This impacts ‘high risk’ activities such as algorithmic trading or risk assessments, but there is already discussion around making the regulation broader. This could mean the inclusion of AI compliance assessments on any new investments, legal review for AI-enabled systems, and AI transparency disclosures in LP reporting.

In the U.S., it’s not exactly clear how the SEC will use its new AI tools, but it certainly has some low-hanging fruit – “AI-powered” scrutiny of 10-K or Form ADV filings? The pattern-recognition capability made possible by AI could, in theory, equip the agency to evaluate millions of transactions in real time.

The So What?

As we all know, regulation always runs at least a few steps (or miles) behind innovation, but our sense is that the scale of the AI hype bubble will cause a proportionate scale of AI anxiety among regulators and may encourage them to move a bit faster than a snail’s pace. Governance will very quickly filter down from the generic (aimed at LLMs) to the specific (aimed at the Alternatives community, among others).
 
The most obvious call to action here, in our mind, is to get your data “AI-ready”. The power of AI is only as strong as the data it runs on – information needs to be accurate, well-structured, compliant and accessible. This gives you opportunity on the upside as well as protection from the downside. I (unfortunately) do not have a crystal ball to gaze into as it relates to where these regulators will go, but I can tell you with certainty that getting your data ready now will set you up for whatever comes.

Quick Hits

What You Need to Know

Your data may already be in public LLMs…

What Happened: Sensitive corporate data appeared in >4% of AI prompts and >20% of uploaded files 

Our Take: Imagine just a handful of your team members putting your data or *gasp* investor data into a public foundation model? People will find ways to use new technology to better their work output. The questions for you as a leader are a) have you implemented a secure, enterprise-wide model? and b) have you implemented the right governance framework around its use? Don’t blame the kids…

Agents are finally real (sort of)

What Happened: 68% of enterprise companies now deploy AI agents in production, with measurable ROI 

Our Take: First, I don’t buy the 68% figure, as many people confuse a chatbot with an actual agent that can perform work for you. That aside, agents are coming to Alternatives. There will be a day (very soon) when a machine can autonomously resolve a rec break or produce reporting packages. Expect more on this in future issues…

ChatGPT’s hallucination problem

What Happened: WSJ analyzed 96,000 ChatGPT transcripts, finding AI made false claims about extraterrestrial contact and apocalyptic financial predictions that users believed. 

Our Take: I’m not worried about your internal chatbot making claims about aliens, but I do worry about misrepresentation of your data more broadly. Do not underestimate the need for human oversight protocols and human-led training. Confirmation bias is just as dangerous in operations as it is in investment decision making.

Non-techies can code now

What Happened: AI allows non-programmers to create functional applications. In a recent Y Combinator batch, 25% of startups had codebases that were 95% AI-generated.

Our Take: As technologists ourselves, we hope, pray and are pretty confident that there will always be a place for your traditional coder. However, imagine the opportunity that comes from enabling non-technical, domain operations experts with a basic dev toolkit. Vibe coding is at worst a very useful collaboration tool and at best a game-changing way to create value. Either way, it’s here to stay.

401(k) floodgates open

What Happened: An executive order opens potential access to $12.2 trillion in defined contribution plans, while rescinding previous guidance discouraging alternative asset inclusion in 401(k) plans.

Our Take: This may not be directly related to AI, but the news is too big to disregard. Even if 10% of the 401(k) market allocates 5% to alternatives, we are talking about ~$60 billion in new demand. Big numbers, but the implications in terms of reporting, transparency and infrastructure could be even bigger. Retail-adjacent capital implies eventual retail-adjacent operations and governance. How ready is your operating model, and can AI help?

The Numbers That Matter

23

Average number of unknown / unapproved AI tools per enterprise
Your IT department thinks you have 2-3 AI tools. You actually have 23. That’s 20+ unmanaged risk vectors.

Jargon Decoder

Demystifying AI Terms

Model Router

What it means: An AI system that automatically picks which AI model to use for each task – like a smart order router, but choosing among LLMs instead of among exchanges.
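A toy sketch of the idea, with made-up model names, purely for illustration:

```python
# A toy model router (names are illustrative): each task type is sent to the
# model best suited to it, the way a smart order router picks a venue.
ROUTING_TABLE = {
    "reconciliation": "small-fast-model",        # cheap, high-volume work
    "investor_letter": "large-reasoning-model",  # slower, higher quality
}


def call_model(model_name: str, prompt: str) -> str:
    # placeholder: dispatch to a hosted or local model via your abstraction layer
    return f"[{model_name}] response to: {prompt}"


def route(task: str, prompt: str) -> str:
    """Pick a model based on the task type, then make the call."""
    model_name = ROUTING_TABLE.get(task, "default-model")
    return call_model(model_name, prompt)


print(route("reconciliation", "Match these two cash balances."))
```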

Mixture-of-Experts (MoE)

What it means: Instead of one giant, monolithic model, many specialized sub-networks (“experts”) sit inside the model, and a gating layer activates only the relevant few for each input. Like having sector specialists vs. generalists.
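A deliberately simplified sketch of the gating idea (a real MoE gate is a learned layer inside the model, not keyword matching):

```python
# A toy illustration of the MoE concept: a gate scores the "experts" for each
# input, and only the top-scoring specialists do any work.
import math


def softmax(scores: list[float]) -> list[float]:
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]


EXPERTS = {
    "credit": lambda q: f"credit specialist's answer to: {q}",
    "equities": lambda q: f"equities specialist's answer to: {q}",
    "real assets": lambda q: f"real assets specialist's answer to: {q}",
}


def gate(query: str) -> list[float]:
    # a real gating network is learned; here we just score by keyword overlap
    return softmax([float(name.split()[0] in query.lower()) for name in EXPERTS])


def mixture(query: str, top_k: int = 1) -> list[str]:
    weights = gate(query)
    ranked = sorted(zip(EXPERTS, weights), key=lambda kv: kv[1], reverse=True)
    return [EXPERTS[name](query) for name, _ in ranked[:top_k]]  # only top experts activate


print(mixture("What is the outlook for credit spreads?"))
```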

The Gut Check

This Month’s Question

About Aithon

Aithon Solutions delivers intelligent automation and data solutions purpose-built for investment management operations. Our proprietary technology seamlessly integrates with existing systems to enhance operational efficiency, improve reporting accuracy, and unlock deeper business insights. By combining domain expertise with applied AI, we help asset managers do more with less—adding new products and clients faster while driving better outcomes through reimagined processes.

Learn more


Happy reading!

Help Shape the Next Edition of Decoded

Got thoughts, feedback, or ideas? We’re listening and iterating.
