Anonymous, 11/20/2011 12:59 pm


BML 1.0 Standard

This document introduces and describes version 1.0 of the Behavior Markup Language standard. This document contains background information, descriptions of typical use contexts, and, most importantly, the syntactic and semantic details of the XML format of the Behavior Markup Language.

Introduction

The Behavior Markup Language, or BML, is an XML description language for controlling the verbal and nonverbal behavior of (humanoid) embodied conversational agents (ECAs). A BML block (see the example in the figure below) describes the physical realization of behaviors (such as speech and gesture) and the synchronization constraints between these behaviors. BML is not concerned with the communicative intent underlying the requested behaviors. The module that executes behaviors specified in BML on the embodiment of the ECA is called a BML Realizer.


Figure 1: Example of a BML Request
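For readers viewing a text-only export of this page, a minimal BML request along the lines of the figure might look as follows. This is an illustrative sketch: the element and attribute names follow the behavior definitions later in this document, and the behavior and target IDs (speech1, point1, chair1) are hypothetical.

```xml
<bml id="bml1" xmlns="http://www.bml-initiative.org/bml/bml-1.0">
  <!-- a speech behavior with an embedded sync point -->
  <speech id="speech1">
    <text>Would you like to sit <sync id="here"/> here?</text>
  </speech>
  <!-- a pointing behavior whose stroke is aligned with that sync point -->
  <pointing id="point1" target="chair1" stroke="speech1:here"/>
</bml>
```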

Core Standard and Extensions

The BML Standard consists of a small and lean core, plus a few clearly defined mechanisms for extending the language.

Lean Core Standard

The Core of the BML Standard defines the form and use of BML blocks, mechanisms for synchronisation, the basic rules for feedback about the processing of BML messages (see later in this document), plus a number of generic basic behaviors. BML-compliant realizers implement the complete BML Core Standard and provide a meaningful execution for all its behavior elements. Some realizers might offer only partial compliance, for example because they only steer a head (and therefore do not need to interpret bodily behaviors). In that case, a realizer should at least provide exception/warning feedback when requested to execute unsupported Core Standard behaviors (REF FEEDBACK SECTION).

{{div_start_tag(core_summary, inset)}}
table{border:none;float:right}.
|*What:* |BML Core Standard. |
|*Status:* |Mandatory. |
|*XML namespace:* |http://www.bml-initiative.org/bml/bml-1.0 |
|*Examples:* |basic speech, point (INSERT REFS) |
{{div_end_tag}}

Extensions

BML provides several standardized mechanisms for extension. One can define new behaviors (in a custom namespace), or extend upon Core behaviors by adding custom attributes. _Description extensions_ provide a standardized manner for a user to give more detail about how the BML Realizer should realize a given instance of a core behavior, while allowing a fallback to the Core specification when the BML Realizer does not support the extension.
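As a sketch of the fallback mechanism, a core speech behavior carrying an SSML description extension might look like this. The exact syntax of the description element is specified later in this document; the point here is that a realizer that does not support the richer description simply realizes the core-level content instead.

```xml
<speech id="speech1">
  <!-- core-level specification: any compliant realizer can use this -->
  <text>Hello there!</text>
  <!-- richer SSML description; a realizer that does not support it
       falls back to the core <text> above -->
  <description priority="2" type="application/ssml+xml">
    <speak xmlns="http://www.w3.org/2001/10/synthesis">
      Hello <emphasis>there</emphasis>!
    </speak>
  </description>
</speech>
```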

The BML standard defines a number of Core Extensions, both in the form of additional behaviors and in the form of description extensions. The Core Extensions provide behaviors and description levels that we do not want to make mandatory, but that we do want implemented in a standardized way whenever a BML Realizer supports them. We encourage authors of realizers to collaborate and define shared behavior types and description extensions.

{{div_start_tag(coreext_summary, inset)}}
table{border:none;float:right}.
|*What:* |BML Core Extensions. |
|*Status:* |Optional, but if a realizer implements the functionality of a Core Extension, it should exactly follow the standard specification. |
|*XML namespace:* |http://www.bml-initiative.org/bml/... (last part is specified in the definition of the Core Extension) |
|*Examples:* |FACS facial expressions, SSML description extension for speech (INSERT REFS) |
{{div_end_tag}}

Global Context

SAIBA

The Behavior Markup Language is part of the SAIBA Multimodal Behavior Generation Framework (see Figure 2 below). In this framework, the intention for the ECA to express something arises in the Intent Planner. The Behavior Planner is responsible for deciding which multimodal behaviors to choose for expressing the communicative intent (through speech, facial expressions, gestures, etc.) and for specifying proper synchronisation between the various modalities. This multimodal behavior is specified in the form of BML messages. A BML Realizer is responsible for physically realizing the specified BML message through sound and motion (animation, robot movement, ...), in such a way that the time constraints specified in the BML block are satisfied. At runtime, the BML Realizer sends back feedback messages to keep the planning modules updated about the progress and result of the realization of previously sent BML messages, allowing, e.g., for monitoring and possible error recovery.


Figure 2: SAIBA Framework

{{div_start_tag(intentplanning, inset)}}
The exact nature of the intent and behavior planning processes is left unspecified here. As far as the BML Realizer is concerned, it makes no difference whether BML messages are the result of a complicated multimodal affective dialog system, or are simply predefined BML messages pulled from a library of pre-authored materials. {{div_end_tag}}

BML Messaging Architecture

BML does not prescribe a specific message transport. Different architectures have drastically different notions of a message. A message may come in the form of a string, an XML document or DOM, a message object, or just a function call. However, no matter what message transport is used, the transport and routing layer should adhere to the following requirements:

  • Messages must be received in sent order.
  • Messages must contain specific contents that can be fully expressed as XML expressions in the format detailed in this document.

Currently, there are two types of messages:

  • BML Requests.
    • Sent by the Behavior Planner to the Behavior Realizer.
    • BML requests are sent as <bml> blocks containing a number of behavior elements with synchronisation.
  • Feedback Messages.
    • Sent by the Behavior Realizer.
    • Used to inform the planner (and possibly other processes) of the progress of the realization process.
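As an illustration of the second message type, progress feedback could take a form like the following. The precise feedback element names and attributes are defined in the feedback section of this document; the block, behavior, and character IDs and the time values shown here are hypothetical.

```xml
<!-- the sync point "here" of behavior speech1 in block bml1 was reached -->
<syncPointProgress characterId="Alice"
                   id="bml1:speech1:here" time="0.8" globalTime="112.6"/>
<!-- the whole block bml1 has finished -->
<blockProgress id="bml1:end" characterId="Alice" globalTime="114.2"/>
```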

The BML Realizer

Conceptually, BML Realizers execute a multimodal plan that is incrementally constructed (scheduled) on the basis of a stream of incoming BML Requests (see Figure 3). A BML Realizer is responsible for executing the behaviors specified in each BML request sent to it, in such a way that the time constraints specified in the BML request are satisfied. If a new request is sent before the realization of previous requests has been completed, a composition attribute determines how to combine the behaviors in the new request with the behaviors from earlier requests [REF TO SPEC OF COMPOSITION ATTRIBUTE].
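For example, a block arriving while an earlier one is still playing can state how it should be combined with the ongoing plan via the composition attribute (the full set of composition values is given in the referenced section; APPEND is shown here as one case):

```xml
<!-- bml1 is still being realized when bml2 arrives; composition="APPEND"
     requests that bml2's behaviors start only after bml1 has finished -->
<bml id="bml2" composition="APPEND"
     xmlns="http://www.bml-initiative.org/bml/bml-1.0">
  <head id="nod1" lexeme="NOD"/>
</bml>
```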

Each BML Request represents a scheduling boundary. That is, if behaviors are in the same BML request, the constraints between them are resolved before any behavior in the request is executed.


Figure 3: Dealing with an incoming stream of BML Requests

XML Format: Values and Types

Before describing the various XML elements in the BML Standard, we describe here the available attribute types.

We use camelCase throughout for element names and attribute names. Values of type openSetItem and closedSetItem defined in this document are generally all uppercase. The names of default sync points for the various behavior types are written in lowercase with underscores separating words (e.g., stroke_start).

Attribute Value Types

Values for various types of behavior attributes can be one of the following:

  • ID: An identifier that is unique within a specified context (see <bml> and "behavior element"). Adheres to the standard XML type ID.
  • synchref: Describes the relative timing of sync points (see Section SYNCHRONISATION).
  • worldObjectID: A unique ID of an object in the character’s world. Adheres to the standard XML type ID.
  • closedSetItem: A string from a closed set of strings; the standard provides the exhaustive list of strings in the set.
  • openSetItem: A string from an open set of strings; the standard may provide a few common strings in the set.
  • bool: A truth value, either "true" or "false".
  • int: A whole number.
  • float: A number with decimals.
  • angle: A float specifying an angle in degrees counterclockwise, in (-180, 180].
  • string: An arbitrary string.
  • direction: A closedSetItem from the closed set [LEFT, RIGHT, UP, DOWN, FRONT, BACK, UPRIGHT, UPLEFT, DOWNLEFT, DOWNRIGHT].
  • vector: A string of format "float; float; float" indicating the x, y, and z coordinates of a vector.
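To illustrate several of these types in one place, consider the following sketch (the behavior and target IDs are hypothetical; the sync attributes follow the synchronisation section of this document):

```xml
<!-- target is a worldObjectID; start is a synchref with a float offset
     in seconds -->
<gaze id="gaze1" target="blueBox" start="speech1:start + 0.5"/>
<!-- lexeme is an openSetItem; stroke is a synchref to another behavior -->
<gesture id="beat1" lexeme="BEAT" stroke="gaze1:end"/>
```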

Coordinate System and Units

While we prefer specifying behavior using common verbs and nouns, for some attributes or applications the use of precise vectors is unavoidable.

All units are in the MKS system (meters, kilograms, seconds).

BML assumes a global coordinate system in which the positive Y-axis is up. The local (character-based) coordinate system adheres to the guidelines of the H-Anim standard (v1.1): "The humanoid shall be modelled in a standing position, facing in the +Z direction with +Y up and +X to the humanoid’s left. The local character-based origin (0, 0, 0) shall be located at ground level, between the humanoid’s feet."
