How are large professional services organisations deploying GPT-4 (or similar LLMs) internally, particularly when working with sensitive/confidential/PII data? For example, we're a UK-based accountancy firm, so we've created our own internal app based on Microsoft's Azure OpenAI GPT-4 model, deployed to our private Azure hosting. The app is designed to "pre-programme" personas, i.e. pre-created prompts, for different areas of the profession, so internal users can have GPT-4 act in different ways without any prior knowledge of prompt engineering. I'd be interested to see how others are building or buying solutions, particularly those based in the EU/UK who can't use OpenAI/ChatGPT with customer/confidential information.
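The "pre-programmed persona" idea above can be sketched as a small registry that maps a persona name to a pre-written system prompt, so internal users never touch prompt engineering. This is a minimal illustration, not the firm's actual app; the persona names and prompt texts are assumptions invented for the example.

```python
# Registry of pre-created prompts ("personas"); the entries here are
# illustrative assumptions, not the firm's real configuration.
PERSONAS = {
    "audit": (
        "You are an assistant for UK audit staff. Answer with reference to "
        "ISA (UK) standards and flag anything that needs partner review."
    ),
    "tax": (
        "You are an assistant for UK corporate tax queries. Be precise about "
        "which tax year applies and never present output as definitive advice."
    ),
}

def build_messages(persona: str, user_question: str) -> list[dict]:
    """Prepend the persona's system prompt to the user's question, producing
    the messages list a chat-completions API expects."""
    if persona not in PERSONAS:
        raise KeyError(f"Unknown persona: {persona!r}")
    return [
        {"role": "system", "content": PERSONAS[persona]},
        {"role": "user", "content": user_question},
    ]

msgs = build_messages("tax", "Can we deduct client entertainment costs?")
```

The resulting `msgs` list is what the internal app would pass to its privately hosted chat endpoint; the user only ever supplies the question.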

CEO in Services (non-Government) · a year ago
One answer is to use GPT-4 via Microsoft Azure OpenAI if you want GPT-4-level performance. Another option is to host open-source models such as Llama 2 in a cloud or on-premise environment and serve them for inference. We see such requirements coming in from multiple clients. For an Indian government client, we hosted a code LLM for developer experience and built a Visual Studio plugin so developers can consume it; we are about to do the same for an Italian financial client. This involves a one-time hardware cost if the client wants an on-premise solution; for cloud-based deployment there is a recurring GPU usage cost based on the number of requests served.
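Self-hosted open-source models like Llama 2 are commonly served behind an OpenAI-compatible HTTP endpoint (vLLM, for instance, exposes one). The sketch below shows how an inference request to such an endpoint could be assembled; the endpoint URL and model identifier are placeholder assumptions, and no request is actually sent.

```python
import json
import urllib.request

# Hypothetical internal endpoint and model id -- substitute your own.
ENDPOINT = "http://internal-llm.example.local:8000/v1/chat/completions"
MODEL = "meta-llama/Llama-2-13b-chat-hf"

def make_request(prompt: str) -> urllib.request.Request:
    """Assemble (but do not send) a chat-completions request for a
    self-hosted model behind an OpenAI-compatible API."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
        "temperature": 0.2,
    }
    return urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = make_request("Summarise IFRS 16 lease accounting in three bullets.")
```

Because the wire format matches OpenAI's, client code like an IDE plugin can be pointed at either the hosted or the self-hosted model by changing only the endpoint URL.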
Director of Other in Software · a year ago
We are building solutions that enable our customers to bring their own LLMs or integrate with any open- or closed-source LLM. In cases like yours, we encourage customers to integrate with Azure OpenAI to take advantage of Azure's enterprise security.
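The "bring your own LLM" pattern described here usually means the application talks to one interface while a config value selects the backend (Azure OpenAI for enterprise security, or a self-hosted open-source model). A minimal sketch, with class names and endpoint URLs as illustrative assumptions:

```python
from abc import ABC, abstractmethod

class ChatBackend(ABC):
    """One interface for the app; each backend knows its own endpoint."""
    @abstractmethod
    def endpoint(self) -> str: ...

class AzureOpenAIBackend(ChatBackend):
    def endpoint(self) -> str:
        return "https://myfirm.openai.azure.com/"  # hypothetical tenant URL

class SelfHostedBackend(ChatBackend):
    def endpoint(self) -> str:
        return "http://internal-llm.example.local:8000/v1"  # hypothetical

def pick_backend(name: str) -> ChatBackend:
    """Select a backend by config value; unknown names raise KeyError."""
    backends = {"azure": AzureOpenAIBackend, "self-hosted": SelfHostedBackend}
    return backends[name]()
```

Keeping the selection behind one function means a customer can switch between a hosted and a self-hosted model without touching the rest of the app.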
Global Head of AI, Data & Analytics in Software · a year ago
I assume this question is a bit older now, but you did outline the standard approach: host or access a model as a service internally through Azure OpenAI, Azure Machine Learning / a VM, Amazon SageMaker, or Amazon Bedrock.
That said, ChatGPT Enterprise now makes things easier if you don't care about federating models.

IT Manager in Construction · a month ago
Hello, the topic is very broad; what specifically are you focused on?