Decoded: AI News for Alternative Assets
Integration is the new intelligence
In October, the race for bigger models gave way to a race for better integration. Mainstream headlines may still chase compute and capex, but the real story was in how intelligence is being deployed, governed, and monetized. Enterprise platforms declared war, fine-tuning became semi-practical, and persistence moved from concept to implementation.
Here’s what we at Aithon found interesting this month:

A new era of platform wars – the “all-in-one vs. best-of-breed” dilemma returns
The News
Google launched Gemini Enterprise while AWS unveiled Quick Suite on the same day, both targeting enterprise AI deployment with security, compliance, and integration capabilities. Both join the accelerating arms race to build the “do everything” AI platform that customers will be locked into indefinitely.
The Translation
Google Gemini Enterprise provides unified access to all Gemini models through a single API with deep Google Workspace integration, enterprise compliance certifications, and the ability to train on company data.
AWS Quick Suite offers access to multiple AI models, integration with 200+ AWS services, private deployment options, and built-in model selection functionality to optimize costs.
The key difference: Google pushes simplicity and workspace integration while AWS offers flexibility and model choice. Google wants to be your AI operating system; AWS wants to be your AI infrastructure. Neither particularly understands what a waterfall calculation is or how to process a swap reset.
Here’s where it gets interesting for alternatives. These platforms are built for horizontal scale – companies that need AI for customer service, marketing automation, and supply chain optimization. They’re not thinking about side letter management, LP reporting nuances, or the specific madness of a month-end close.
The So What?
The enterprise pitch is compelling: single vendor, unified security, one ‘throat to choke’ when things go wrong. Your IT team will sleep better, your procurement team will write fewer checks, and your CISO might actually smile.
The other path is assembling your own Avengers team of specialized AI vendors. Use the document analysis company that actually understands subscription docs. Use the portfolio monitoring platform that knows what MOIC means. Use the compliance tool that speaks ILPA fluently. Yes, you’ll have more vendors to manage and more integrations to maintain…but you’ll also have tools that actually understand your business.
Go enterprise and you’re betting that general-purpose AI will eventually learn your domain. Go specialized and you’re betting that alternatives are different enough to warrant purpose-built solutions.
Our take? Domain expertise is paramount. The vendor that understands carry calculations will beat the vendor with the prettier dashboard every time. But you need the operational capacity to manage multiple specialized vendors. If you’re a $50B+ fund with a proper IT organization, cherry-pick the best specialized tools. If you’re smaller, consider using an enterprise platform for general productivity while carefully selecting one or two alternatives-specific vendors for critical operations.
The wrong answer is trying to do both badly. Pick a lane, commit to it, and build the operational infrastructure to support it. The AI decisions you make today will compound exponentially over the next five years.

Fine-tuning becomes accessible(ish)
The News
Thinking Machines Lab, founded by former OpenAI CTO Mira Murati, launched Tinker – a Python API that simplifies fine-tuning of large language models. The company raised $2 billion at a $12 billion valuation and promises to make custom AI accessible.
The Translation
Tinker is a Python API that reduces fine-tuning complexity from hundreds of lines of code to about 10-20 lines, automatically handling infrastructure, resource allocation, and error recovery. It supports Meta’s Llama and Alibaba’s Qwen models, reduces compute costs, and includes a “Cookbook” with pre-built examples for common workflows.
Let’s be clear about what “accessible” means here. Before Tinker, fine-tuning required a PhD in machine learning, $500K in GPUs, and the patience of a saint. Now it just requires Python programming skills, clean training data, and the ability to debug when your model starts hallucinating about portfolio returns.
The promise is compelling: train AI on your decade of investment memos, due diligence reports, and LP communications to create a model that actually understands how you operate. Imagine an AI that knows your investment criteria, recognizes your red flags, and writes in your voice. Early adopters have shown impressive results.
But here’s the reality check. You still need to aggregate and clean 5-10 years of documents (good luck with those scanned PDFs from 2015). You need developers who understand both Python and your business logic. Importantly, you also need to establish metrics to measure whether your fine-tuned model is actually better than GPT-5 with good prompting.
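What “establish metrics” means in practice can be sketched in a few lines of Python. This is a toy evaluation harness, not a real implementation: the two model functions and the sample questions are hypothetical placeholders you would replace with calls to your own fine-tuned endpoint and your well-prompted baseline.

```python
# Toy evaluation harness: score two models against a small labeled set.
# The model functions below are stubs; wire them to your real endpoints.

def fine_tuned_model(question: str) -> str:
    # Placeholder: replace with a call to your fine-tuned model
    return "non-binding"

def prompted_baseline(question: str) -> str:
    # Placeholder: replace with a well-prompted base-model call
    return "binding"

def accuracy(model, eval_set) -> float:
    """Fraction of questions the model answers exactly right."""
    hits = sum(1 for q, gold in eval_set if model(q).strip().lower() == gold)
    return hits / len(eval_set)

# A tiny labeled eval set drawn from documents whose answers you already know
eval_set = [
    ("Is the side letter MFN clause binding on successor funds?", "non-binding"),
    ("Does the LPA permit recycling of realized proceeds?", "binding"),
]

print(f"baseline={accuracy(prompted_baseline, eval_set):.0%}, "
      f"fine-tuned={accuracy(fine_tuned_model, eval_set):.0%}")
```

If the fine-tuned column doesn’t beat the baseline column by a margin that justifies the training spend, you have your answer before the $250K is gone.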
The So What?
Fine-tuning is becoming democratized, but it’s still not democratic. This is accessible like a Porsche is accessible – technically available to more people than a Formula 1 car, but still requiring significant resources and expertise.
The real question isn’t whether you can fine-tune, but whether you should. For narrow, well-defined use cases where you have clean data and clear success metrics? Maybe. For broad operational automation where requirements change monthly? Probably not.
If you’re considering fine-tuning, start by working with a hands-on execution partner who’s done this before. They can help you understand whether your use case actually benefits from fine-tuning versus good prompt engineering, whether your data is sufficient, and whether the ROI justifies the effort. The last thing you need is a $250K Python project that produces a model that is 2% better than ChatGPT at understanding your financial statements.
Remember that the goal isn’t to have the most sophisticated AI setup. It is to have the setup that actually improves your operations. Sometimes that’s a fine-tuned model. Sometimes it’s just better prompts. And sometimes…it’s just better processes.
Quick Hits
What You Need to Know
Will AI mark the end of Excel? …NOPE
What Happened: Anthropic launched “Claude for Excel” integration, joining a host of other companies seeking to augment Excel with AI.
Our Take: If you reflect for a moment, the spread of AI is in many ways following the same path as the spread of Excel. Both started as “neutral” tools to supplement core work, both quickly became highly personalized vaults for intellectual property, and both eventually became a source of headaches for CTOs (think shadow AI). Now both will… work together? Firms should get their governance in order, as AI has substantially more potential for damage than John’s macro breaking while he’s on holiday.
Central banks hop on the AI train
What Happened: The Bank for International Settlements (BIS) revealed that several regulatory bodies are already using AI for market surveillance, economic forecasting, and supervisory functions.
Our Take: To trap a fox, you must outfox the fox. I am half joking, but you should be 100% serious when looking at your compliance workflows. AML, KYC, transaction monitoring and fraud detection are all prime, rule-based use cases for AI which should be looked at with priority. We at Aithon love rule-based work because it can be automated and streamlined. The regulators love it for a different reason.
The AI payments (r)evolution continues
What Happened: OpenAI launched instant checkout allowing purchases directly within ChatGPT conversations.
Our Take: Remember last month when we spoke about payments protocols for agents? Well, now here’s one in action. Commerce infrastructure is rebuilding around agentic AI – the question is how fast it will come to institutional finance. Consumers will put convenience first, but in our space it is the audit trails and controls that will take priority.
The plumbers of AI get a big payday
What Happened: n8n raised funding at a $2.5 billion valuation and revealed that 80% of their workflows now have embedded AI agents.
Our Take: For those who aren’t in the know, n8n is a workflow automation platform that helps users connect apps, APIs, data and now AI agents without heavy coding. These kinds of orchestration platforms are the unsexy infrastructure you will need to, for example, connect your GenAI document processing system to your GL, pull portfolio company data for your new monitoring agent, or push output from your reporting agent downstream. Don’t sleep on the make-or-break workflow aspects of AI transformation.
Persistence is power
What Happened: OpenAI launched ChatGPT Knowledge, allowing organizations to upload internal documents, policies, and proprietary data directly into ChatGPT.
Our Take: Without persistence and context, your AI solution will forget yesterday’s insight before today’s market open. Their absence is one of the most severe blockers to a successful AI implementation. ChatGPT Knowledge is one way of solving for this, but only if your data is AI-ready. That means clean, structured, properly tagged, and free of the errors you’ve been ignoring for years. Before you upload your fund’s entire history, ask yourself: is your data ready to be the foundation of your AI strategy?
The Numbers That Matter
43%
of enterprise AI projects are stuck in “partially deployed,” with unclear ownership between IT and business teams
AI doesn’t fail because it’s dumb – it fails because no one owns it
Jargon Decoder
Demystifying AI Terms
Fine-Tuning vs. Retrieval Augmented Generation (RAG)
Fine-tuning permanently changes an AI model’s behavior through training while RAG gives AI access to your documents without changing the model.
Enterprise AI Bundling
Paying for 50 AI features when you only need 5. The AI equivalent of cable TV packages.
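The fine-tuning vs. RAG distinction above can be made concrete with a toy sketch: RAG leaves the model untouched and simply retrieves a relevant document into the prompt. This is illustrative only – real systems retrieve by vector embeddings, not keyword overlap, and the sample documents are invented.

```python
# Toy RAG sketch: retrieve the internal document with the most keyword
# overlap with the question, then stuff it into the prompt. The model
# itself is never retrained - that is the contrast with fine-tuning.

documents = {
    "lpa_summary": "The LPA sets the carry at 20% over an 8% preferred return.",
    "side_letter": "The side letter grants the LP quarterly transparency reports.",
}

def retrieve(question: str, docs: dict) -> str:
    """Return the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(docs.values(),
               key=lambda text: len(q_words & set(text.lower().split())))

def build_prompt(question: str) -> str:
    context = retrieve(question, documents)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("What is the carry and preferred return in the LPA?"))
```

Swap the documents and the retriever improves or degrades instantly – no retraining required, which is exactly why RAG is usually the first thing to try before fine-tuning.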
The Gut Check
This Month’s Question
About Aithon
Aithon Solutions delivers intelligent automation and data solutions purpose-built for investment management operations. Our proprietary technology seamlessly integrates with existing systems to enhance operational efficiency, improve reporting accuracy, and unlock deeper business insights. By combining domain expertise with applied AI, we help asset managers do more with less – adding new products and clients faster while driving better outcomes through reimagined processes.
Learn more
Now, this newsletter is not a sales pitch, but let me quickly introduce you to Aithon. We are a group of former leaders in the Alternatives space who spun out a new venture aimed at solving old-world problems with new-world solutions. We cut our teeth in the middle and back office, so that is where most of our solutions and this newsletter will focus. Not that a front-office chatbot isn’t great, but we personally get more excited about how to turn a two-week close process into a two-day process, or how to get actionable insight from untapped operations data. Anyway…the purpose of this newsletter is simple – to clarify the opaque, to spur thought, and to hopefully inspire you to take the plunge into the world of AI (the water is warm, we promise).
Happy reading!
Help Shape the Next Edition of Decoded
Got thoughts, feedback, or ideas? We’re listening and iterating.


