Building Self-Hosted Generative AI Solutions at Scale - Munich


Event essentials

Event Location

upside east


In collaboration with NVIDIA, HPE invites you to attend our ‘Building Self-Hosted Generative AI Solutions at Scale’ workshop in Munich.
If you are an IT professional, machine learning engineer, or data scientist tasked with implementing generative AI solutions at scale, then this is the event for you!
What you will get out of the day:
  • An understanding of the challenges, and the solutions, involved in successfully delivering generative AI projects in your organization. Learn:
    • How to build your own ML platform
    • What infrastructure, platform software, and resources you need
    • How to scale your AI model training across billions of parameters in an efficient and sustainable way, and why hyperparameter optimization is key
    • How to maintain complete reproducibility through data and experiment lineage
  • HPE and NVIDIA's joint strategy to help customers leverage Large Language Models and build domain-specific RAG applications. Learn:
    • How our direction can improve your business operations
    • How to avoid the pitfalls where organizations commonly fall short
    • How to avoid the escalating costs of the cloud, and why you should consider on-premises deployment
    • What business outcomes this emerging technology makes possible
    • How HPE and NVIDIA provide an unmatched better-together solution for generative AI
  • Interactive discussions with AI experts on Generative AI and LLMs
  • Hands-on AI/ML exercises and demonstrations (remember to bring your laptop!)
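To give a flavor of the RAG topic covered in the hands-on session, here is a minimal, purely illustrative sketch of the retrieval step in a retrieval-augmented generation pipeline. A toy bag-of-words similarity stands in for a real embedding model, and all names and document text are hypothetical, not part of the workshop materials:

```python
# Minimal sketch of the retrieval step in a RAG pipeline (illustrative only):
# document chunks are "embedded" as bag-of-words vectors, the query retrieves
# the closest chunk, and that chunk is prepended to the LLM prompt as context.
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' -- real systems use a neural encoder."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str]) -> str:
    """Return the document chunk most similar to the query."""
    q = embed(query)
    return max(chunks, key=lambda c: cosine(q, embed(c)))

# Hypothetical document chunks standing in for your own domain data.
chunks = [
    "Distributed training schedules jobs across many GPUs automatically.",
    "Retrieval augmented generation grounds LLM answers in your own documents.",
]
context = retrieve("how does retrieval augmented generation work", chunks)
prompt = f"Answer using this context:\n{context}\n\nQuestion: ..."
```

In a production deployment, the bag-of-words step is replaced by a GPU-accelerated embedding model and a vector database, but the grounding idea is the same: retrieve first, then generate.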


Plus hear from our guest speakers:



Cyrill Hug

Manager, AI Solutions Engineering
at HPE



Jordan Nanos

Machine Learning Architect and
Master Technologist at HPE



Alex Reddington

HPC & AI Solution Architect at HPE



Guillaume Barat

EMEA Alliance Director at NVIDIA



12:00 - 13:30

Registration & Lunch

13:30 - 14:00

Welcome & Agenda

14:00 - 14:45

Optimized infrastructure for AI-at-Scale workloads

14:45 - 15:30

Model Development, Training and Tuning at Scale

Train large-scale machine learning models faster while hiding the complexity of the underlying heterogeneous infrastructure

15:30 - 16:00

Coffee break

16:00 - 16:45

Model Deployment & Inference at Scale

Deploy & manage models and run inference on heterogeneous infrastructure, from the data center to the edge

16:45 - 17:50

Optional hands-on session

Foundation models and RAG: implementing practical GenAI

17:50 - 18:00

Closing comments


Please join us for drinks and canapés


Additional info