Artificial intelligence is the future of enterprise computing, but it brings with it a range of legal questions. Learn how VMware Private AI could be the solution.
These days, the cloud world is abuzz with three hot topics: computing at the edge, multi-cloud environments and artificial intelligence (AI).
AI has gone from being the stuff of sci-fi to deployable software to a household name in what feels like the blink of an eye. Generative language models like ChatGPT are capturing the conversation, thanks to both the advantages and the risks they bring.
In the world of enterprise, generative AI has huge appeal. It promises to bring accuracy and cost efficiencies to a huge array of activities, as well as actionable insights into customer behaviour.
The most commonly voiced concern is that it's another wave of automation taking away human jobs – but even if you're fully behind the AI revolution, there's another problem. How can you deploy AI without creating legal issues for yourself and others?
One answer to this question is VMware Private AI. But before we take a look at this new offering from virtualisation veterans VMware, let's quickly recap the sometimes uneasy relationship between AI and the law.
AI and the law
There are three main legal problems posed by AI in its current incarnations – and as AI develops, so too will the legal challenges.
First, there's the question of intellectual property and copyright. Say you use AI to help write marketing copy and, in the process, it incorporates a chunk of copyright-protected text.
No one wants their blurb to incite a legal challenge – but in the age of AI, it could be out of your hands.
Second, there's the security of personal data. Any AI system handling personal data is at risk of causing data leaks and privacy breaches. These aren't only bad because they cause downtime and compromise customer trust – they can also lead to a fine under data protection regulations.
Third (and relatedly), AI risks putting you in breach of contract if it uses customer data in ways your agreements with those customers don't permit.
Finally, it risks violating trust. Customers want to know if the chatbot they're talking to is human or not. Transparency is needed to avoid consumer law challenges.
Surely, you might think, the law has got a grip on all of this? Well, yes and no.
AI is evolving at such a clip that legal frameworks are lagging behind. This means companies can't realistically wait for 100% clarity before adopting AI models.
What's the answer? This is a question that corporate lawyers in every industry are grappling with – and which is stopping AI from being used at its full potential.
VMware has long been on the cutting edge – so it's no surprise that it's come up with a solution to this thorny issue in the form of Private AI.
This is an architectural approach that balances the business case for AI with practical questions relating to privacy and compliance. It does this by bringing compute capacity and AI models to where the data is – and keeps them all away from the public eye.
For this reason, it could be said that Private AI is paving the way for a secure AI-driven future.
What is Private AI?
Private AI was unveiled by VMware at the 2023 Explore event in Las Vegas – the first in that year's series of conferences where VMware experts and clients mingle and learn the latest.
Its big sell is this: Private AI enables businesses to run generative AI in their own data centres without relying on the public cloud. This makes the process private in a way that hasn't been possible before.
Private AI brings large language models (LLMs) to where the data is being generated, processed and put to work.
In this way, VMware is seeking to strike a balance between innovation and security – one that has been crucial to the fourth industrial revolution from the cloud to the Internet of Things.
Given that AI is likely to become an integral part of the way enterprises work, it can't be the case that CEOs have to choose between innovation and security. Both need to be readily available and deployable. That's where Private AI comes in.
VMware has also announced a new partnership with Nvidia, which is a world leader in AI computing.
The fruit of their teamwork is the VMware Private AI Foundation with Nvidia, set to drop at the start of 2024. Where Private AI is all about bringing AI to private data centres, the collaboration with Nvidia is a set of AI tools that enables you to run models trained on your private data. These models can be deployed in a public cloud, private data centre or at the edge.
All of this serves a wider purpose – of protecting corporate data and intellectual property when using generative AI. This way, enterprises won't have to sacrifice security for the sake of innovation.
What are the underlying principles of Private AI?
According to VMware, there are three core principles underlying Private AI.
- It's highly distributed. Data in disparate locations can all be governed from a single central point.
- It prioritises data privacy and control. An organisation's data can only be used to train commercial models with its consent.
- It harnesses access controls and audit logs to ensure compliance.
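To make the third principle concrete, here's a minimal Python sketch of what an access-control-plus-audit-log gateway in front of a private model might look like. All names here are hypothetical and illustrative – this is not VMware's implementation or API:

```python
import datetime

# Illustrative sketch: a thin gateway that enforces role-based access
# to a private model and records every request in an audit log, so
# compliance teams can review exactly who asked the model what, and when.

AUDIT_LOG = []
ALLOWED_ROLES = {"analyst", "admin"}  # roles permitted to query the model

def query_private_model(user, role, prompt):
    """Check the caller's role, log the attempt, then (pretend to) run the model."""
    allowed = role in ALLOWED_ROLES
    AUDIT_LOG.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "role": role,
        "prompt": prompt,
        "allowed": allowed,
    })
    if not allowed:
        raise PermissionError(f"role '{role}' may not query the model")
    # Stand-in for a real inference call against a privately hosted model.
    return f"[model response to: {prompt}]"
```

Note that the request is logged *before* the access decision is enforced, so denied attempts leave an audit trail too – which is exactly the kind of record a regulator or legal team would ask for.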
Conclusion
Generative AI promises a lot – and many enterprises are hurrying to get on board, sold on the cost savings, generative text and actionable insights it can provide.
But while this is happening, legal teams across the world are biting their nails. The challenge is to find a way of utilising AI in an enterprise setting without putting data at risk and having to deal with legal challenges, regulatory fines or other compliance issues.
VMware's Private AI – and Private AI Foundation with Nvidia – is attempting to square this circle and remove the need for compromise. With Private AI, the question is no longer "AI or security?" but rather "When do we start?"
Based in Cork, Ascend provides expert cloud solutions in Ireland and worldwide. Are you looking for a cloud consultant? Contact Ascend today for a no-obligation consultation.