
MLPerf Client 1.5.0


@mrxmartino released this 17 Nov 15:51

What’s new in MLPerf Client v1.5

MLPerf Client v1.5 brings a host of new features and capabilities, including:

Acceleration updates

  • Support for Windows ML
  • Runtime updates from IHVs

GUI updates

  • Various tweaks for usability and legibility
  • GUI moves out of beta

An iPad app

  • GUI only

A Linux version

  • CLI only, experimental

MLPerf Power/energy measurements

  • An experimental addition
  • Optional CLI tools and documentation
  • Works in conjunction with MLPerf Power and SPEC PTDaemon

Various improvements to the benchmark organization

  • Introduction of the "extended" category for non-base, non-experimental benchmark components
  • The Llama 2 7B Chat model moves from base to extended
  • Some components move from experimental to extended:
    -- 4K and 8K prompts
    -- Phi 4 14B Reasoning model
  • Moving out of experimental:
    -- Windows ML support
    -- Llama.cpp on Windows and Mac

Known issues

In the GUI version of the application, on AMD-based systems, the following configs may crash:

  • ORT GenAI GPU DirectML
  • ORT GenAI Ryzen AI NPU-GPU

The crash is caused by an enumeration issue in the application, in which these configs conflict with Windows ML.

To work around this issue, we have released a build of the GUI application for Windows x64 without Windows ML support. Users of AMD systems may use this version of the benchmark to test the two ORT GenAI-based paths noted above.