Machine Learning, GenAI, and LLMs

Machine learning algorithms are used in many disciplines at the University of Zürich. Whether you are using traditional algorithms such as Random Forests and Naive Bayes or leveraging Large Language Models (LLMs) and Generative AI, Science IT provides specialized expertise to bridge the gap between complex code and high-performance execution. We connect sophisticated model architectures with the power of high-performance computing (HPC) and GPU-accelerated clusters. By focusing on reproducible, multi-platform workflows, we help you navigate the intricate ecosystem of hardware, software, and interconnects required for modern AI at scale.

Example Questions

  • Getting Started: Are you new to AI and unsure how to move beyond web interfaces to build custom local pipelines?
  • Efficiency: Is your code (e.g., based on PyTorch) running too slowly, or are you struggling to fit large models into available GPU memory?
  • Scaling: Do you need to scale your research across multiple GPUs on a single node—or even across multiple GPU nodes—but aren't sure how?
  • Reproducibility: Are you struggling with dependency roadblocks or software incompatibilities that hinder your research?
  • Customization: Do you want to move beyond standard tools (such as AlphaFold) to build custom workflows, for example RAG (Retrieval-Augmented Generation) or LLM fine-tuning, and need help setting up the specialized software environments (CUDA, PyTorch, Hugging Face) required to run them?
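The memory question raised under "Efficiency" above often starts with a back-of-envelope estimate. The sketch below is purely illustrative: the function names, the bf16 weight precision, and the Adam optimizer-state assumptions are our own simplifications (activations, CUDA context, and framework overhead are ignored), not a Science IT sizing tool.

```python
# Rough GPU memory estimate for holding or training a model.
# Illustrative assumptions only: bf16/fp16 weights (2 bytes per
# parameter) and Adam with two fp32 optimizer states per parameter.

def model_memory_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """Memory needed just to hold the weights (e.g. 2 bytes for bf16)."""
    return n_params * bytes_per_param / 1024**3

def training_memory_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """Rough training footprint: weights + gradients + two fp32
    optimizer moments (Adam), ignoring activations and overhead."""
    weights = n_params * bytes_per_param
    grads = n_params * bytes_per_param
    optim_states = n_params * 4 * 2  # two fp32 moments per parameter
    return (weights + grads + optim_states) / 1024**3

# Example: a 7-billion-parameter model in bf16.
print(f"weights only: {model_memory_gb(7e9):.1f} GiB")
print(f"training (Adam, no activations): {training_memory_gb(7e9):.1f} GiB")
```

Even this crude estimate shows why a model that runs comfortably for inference on a single GPU can exceed the memory of that same GPU during full fine-tuning, which is one motivation for the parameter-efficient strategies mentioned below.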

Expert Guidance and Support

If your research requires the use of GenAI or high-performance GPU computing, Science IT experts provide hands-on consulting for:

  • Workflow Integration: Selecting the right tools to integrate your research data seamlessly with models on GPU and HPC clusters.
  • Scaling & Parallelization: Distributing workloads efficiently across multiple GPU nodes to reduce training and inference time.
  • Reproducibility through Containerization: Using tools like Apptainer and Docker to package your entire software stack, eliminating "dependency friction" and ensuring portability across platforms, including HPC clusters.
  • Model Efficiency: Practical strategies for fine-tuning large models to fit within available hardware constraints.
  • Custom AI Development: Support for languages and frameworks (with an emphasis on Python) to move your research from "out-of-the-box" tools to custom implementations.
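To illustrate the containerization approach described above, a minimal Apptainer definition file for a GPU-enabled PyTorch environment might look like the following. The base image tag and pip packages are placeholders chosen for illustration, not a recommended configuration:

```
Bootstrap: docker
From: pytorch/pytorch:2.3.0-cuda12.1-cudnn8-runtime

%post
    # Add project dependencies on top of the base image
    pip install --no-cache-dir transformers datasets

%runscript
    exec python "$@"
```

The container is then built once (`apptainer build ml-env.sif ml-env.def`) and can be run on a GPU node with the `--nv` flag to expose the host's NVIDIA drivers (`apptainer run --nv ml-env.sif train.py`), so the same image behaves identically on a laptop and on an HPC cluster.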

How to initiate an expert service

  • Get in contact with Science IT and briefly explain what type of service you are interested in, together with a short description of your use case and needs: contact Science IT
  • Based on the information you provide, your request will be routed to a suitable expert, who will contact you to arrange the remaining steps.

Terms and conditions

  • Effort-based costs
  • Expert services are restricted to UZH researchers and groups.
  • More details (incl. costs and agreement templates) are available in the UZH Intranet.