ML

The ML interface of the WebNN API provides methods for creating machine learning contexts in web applications. It is accessed through navigator.ml.

Web IDL

web-idl
enum MLDeviceType {
  "cpu",
  "gpu",
  "npu"
};

enum MLPowerPreference {
  "default",
  "high-performance",
  "low-power"
};

dictionary MLContextOptions {
  MLDeviceType deviceType = "cpu";
  MLPowerPreference powerPreference = "default";
};

[SecureContext, Exposed=(Window, DedicatedWorker)]
interface ML {
  Promise<MLContext> createContext(optional MLContextOptions options = {});
  Promise<MLContext> createContext(GPUDevice gpuDevice);
};

Methods

createContext()

Syntax

create-context.js
createContext()
createContext(options)
createContext(gpuDevice)

Parameters

  • options (optional): An MLContextOptions object providing the application’s preferences, such as device type and power preference
  • gpuDevice: A specific GPUDevice object, obtained from the WebGPU API, to use with the context

Return Value

A Promise that resolves to an MLContext object.

Description

The ML interface enables creation of machine learning contexts with specific device and power preferences. It provides two overloads of createContext(): one that accepts an MLContextOptions dictionary and one that accepts an existing GPUDevice.

Device Type Options

An enumeration defining supported device types for ML operations:

  • "cpu": CPU-based processing
  • "gpu": GPU-based processing
  • "npu": Neural Processing Unit-based processing

"cpu"

  • Provides maximum compatibility across devices
  • Variable performance based on hardware capabilities
  • Default option if not specified

"gpu"

  • Offers highest performance potential
  • May fall back to other devices for certain operations
  • Requires compatible graphics hardware

"npu"

  • Optimized for power efficiency
  • Suitable for sustained workloads
  • May fall back to other devices for unsupported operations
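
If a preferred device type is unavailable, context creation may be rejected. One common pattern, sketched below, is to try device types in order of preference and fall back to "cpu"; the createContextWithFallback helper and its deviceTypes argument are illustrative assumptions, not part of the API.

device-fallback.js
// Illustrative helper (not part of the API): try each device type in order
// of preference and fall back to the next if context creation is rejected.
async function createContextWithFallback(deviceTypes = ["npu", "gpu", "cpu"]) {
  for (const deviceType of deviceTypes) {
    try {
      return await navigator.ml.createContext({ deviceType });
    } catch (error) {
      console.warn(`Could not create a "${deviceType}" context:`, error);
    }
  }
  throw new Error("No ML context could be created on this device.");
}

const context = await createContextWithFallback();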

Power Preference Options

An enumeration specifying power consumption preferences:

  • "default": System-determined behavior
  • "high-performance": Prioritizes speed over power efficiency
  • "low-power": Prioritizes power efficiency over performance

"default"

  • Lets the user agent determine optimal behavior
  • Balanced approach to performance and power consumption

"high-performance"

  • Maximizes execution speed
  • May increase power consumption
  • Suitable for compute-intensive tasks

"low-power"

  • Minimizes power consumption
  • May reduce performance
  • Ideal for battery-powered devices
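
As a rough illustration of choosing a power preference at runtime, the sketch below requests "low-power" when the Battery Status API reports the device is discharging; the getBattery() check is an assumption about the host environment, is not part of WebNN, and its availability varies by browser.

power-preference.js
// Hypothetical heuristic: prefer "low-power" on battery power, otherwise
// leave the decision to the user agent via "default".
let powerPreference = "default";
if ("getBattery" in navigator) {
  const battery = await navigator.getBattery();
  if (!battery.charging) {
    powerPreference = "low-power";
  }
}

const context = await navigator.ml.createContext({ powerPreference });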

ML Context Options

A dictionary of configuration options for ML contexts:

web-idl
dictionary MLContextOptions {
  MLDeviceType deviceType = "cpu";
  MLPowerPreference powerPreference = "default";
};
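
Because both members have defaults, the following calls should request equivalent contexts on a conforming implementation; this is a sketch based on the dictionary defaults above, not a statement about any particular browser.

context-defaults.js
// All three calls request a CPU context with the default power preference.
const a = await navigator.ml.createContext();
const b = await navigator.ml.createContext({});
const c = await navigator.ml.createContext({
  deviceType: "cpu",
  powerPreference: "default",
});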

Examples

Creating a Context with Default Options

example-1.js
try {
  const context = await navigator.ml.createContext();
  // Context ready for use
} catch (error) {
  console.error('Failed to create ML context:', error);
}

Creating a Context with Specific Options

example-2.js
const options = {
  deviceType: "gpu",
  powerPreference: "high-performance"
};
 
try {
  const context = await navigator.ml.createContext(options);
  // High-performance GPU context ready
} catch (error) {
  console.error('Failed to create GPU context:', error);
}

Using with a Specific GPU Device

example-3.js
// Obtain a GPUDevice instance via the WebGPU API
const adapter = await navigator.gpu.requestAdapter();
const gpuDevice = await adapter.requestDevice();
 
try {
  const context = await navigator.ml.createContext(gpuDevice);
  // Context created with specific GPU device
} catch (error) {
  console.error('Failed to create context with GPU device:', error);
}

Specifications

WebNN API                      Status
ML interface                   Candidate Recommendation Draft

Security Requirements

  • Requires a secure context
  • Available in Window and DedicatedWorker contexts
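
Because the interface is gated on a secure context and is not yet universally shipped, it is reasonable to feature-detect before use. A minimal sketch (the warning message is illustrative):

feature-detection.js
// navigator.ml is only defined in secure contexts on browsers that implement WebNN.
if ("ml" in navigator) {
  const context = await navigator.ml.createContext();
  // Context ready for use
} else {
  console.warn("WebNN (navigator.ml) is not available in this context.");
}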

Browser Compatibility

The ML interface is under active development and browser support may vary.

See Browser Compatibility: WebNN API

Development Status

The ML interface and MLContextOptions are actively being developed. Current considerations include:

  • Implementation of fallback device mechanisms
  • Support for multiple devices in preferred order
  • Device exclusion capabilities
  • Enhanced error handling
  • Quantized operator support
