
Gumbo Component Architecture

SourceForge Editorial Staff

An Introduction to the Spark Component Architecture

by Deepa Subramaniam


Flex 4 will be applicable to a much wider range of tasks and people. As we work to fulfill one of the three major themes for Flex 4, Design in Mind, we will provide a framework that allows developers and designers to work seamlessly together in an unfettered environment to create rich content previously unseen in Flex applications. To create this type of framework, we have taken the best of the Flex 3 component architecture (from here on out referred to as the MX component architecture) and merged that with a new set of classes and functionality that results in the Flex 4 component architecture.

This whitepaper aims to introduce the ins-and-outs of the Spark component architecture, serve as a foundation for building Spark components, and highlight where Spark provides functionality previously difficult in the MX model. Wherever possible, this document links to existing Spark specifications that have been published on the Open Source Flex site.

There are several goals we aim to achieve with the Spark component and skinning architecture. These include:

  • Design Friendly: Spark provides a clean separation of visuals and logic including a new skinning architecture that allows for easy and powerful visual customizations.
  • Composition: Spark components act as 'building blocks', meaning component functionality can be composed together easily to build up (or down) to the end result the Flex designer or developer desires.
  • Extensibility: Spark components are easy and straightforward to extend.

The following sections describe the Spark component architecture in detail and how each element of this new architecture satisfies one or more of the above-stated goals. While MX satisfies some of these goals today, the Spark component architecture does so in a more formal and tool-able manner.

Interoperability between MX and Spark

The Spark architecture is going to be built atop the same base classes used by the MX architecture (e.g., UIComponent) while introducing new base classes which are discussed below in more detail. This means that applications written in MX can incrementally adopt the Spark components without difficulty. While the details remain to be worked out, we would like to see MX components usable within Spark containers, and Spark components should be usable inside MX containers. Horizontal infrastructure like focus and drag and drop will be the same between the two models.

Spark introduces many new classes, some of which are re-implementations of existing MX classes (and thus have the same name). In order to disambiguate between MX and Spark, namespaces are used. For more information on how namespaces are used for disambiguation between MX and Spark content, please refer to the 'Dropping the Fx Prefix' document.

Additionally, all Spark applications target Flash Player 10 exclusively.

A high-level view of the Spark architecture

Core to the Spark component architecture is the idea of factoring a control into different classes that handle select pieces of behavior. This clear separation allows for a better understanding of the inner workings of a component and makes the addition of new behavior easier and more predictable.

The main component class, the one whose class name matches the component's MXML tag name, encapsulates the core behavior of the component. This includes defining what events the component dispatches, the data that component represents, wiring up any sub-components that act as parts of the main component, and managing and tracking internal component state (we discuss states in more detail below).

Coupled with that component class is a skin class which manages everything related to the visual appearance of the component, including graphics, layout, representing data, changing appearance in different states, and transitioning from state to state. In the MX model, Flex component skins were assets responsible for only one part of the graphics of a component. Changing any other aspect of a component's appearance, like layout or visualization of states, required sub-classing the component and editing the ActionScript code directly. In the Spark model, all of this is defined declaratively in the skin class, primarily through new graphics tags called FXG tags. To learn more about the new graphics tags in Spark, jump ahead to the Graphics in MXML section below.
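To make this concrete, here is a minimal sketch of what such a declarative skin might look like. The file name, root tag, namespace URI, and attribute names are illustrative assumptions based on the classes discussed in this document, not a definitive API:

```mxml
<!-- MyButtonSkin.mxml: a hypothetical, minimal Spark-style skin.
     Everything visual lives here, not in the component class. -->
<Skin xmlns="http://ns.adobe.com/mxml/2009">
    <content>
        <!-- Background drawn with FXG graphics tags -->
        <Rect width="100%" height="100%" radiusX="4" radiusY="4">
            <fill>
                <SolidColor color="#CCCCCC"/>
            </fill>
        </Rect>
        <!-- Label centered over the background -->
        <TextGraphic text="OK" horizontalCenter="0" verticalCenter="0"/>
    </content>
</Skin>
```

Note that swapping in a differently drawn skin file changes the component's entire appearance, including layout of its parts, without touching any ActionScript in the component class.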

Let's take a moment to briefly discuss states and how they play a role in skin management. The concept of states was introduced in MX, where a single state encapsulated a collection of changes to a particular view so that an application (and now, in Spark, a component) could be partitioned into different states governing its different views. MX states have been expanded in Spark, as detailed in the Enhanced States Syntax specification, to make the interaction between components and states more straightforward. A component manages its state internally; as it enters and exits different states, it instructs its skin class to change states and assume a different visual appearance defined within the skin. For example, when a user hovers over a Button control, the component instructs the Button skin to assume the over state. This pairing of duties allows Spark components to easily, and in a declarative manner, modify the visuals of a component based on interactive behavior.

The New Component Lifecycle section below describes in detail the Spark component lifecycle, including how a component defines its behavior and sub-components, associates itself with a particular skin class, and manages itself in different states.

Spark Base Classes

As the Spark component architecture evolves, the need for new base classes that the Spark components extend has surfaced. New in the Spark model are five base classes associated with component development: SkinnableComponent, Group, MXMLComponent, SkinnableContainer, and SparkSkin. The purpose of each class is enumerated below:

  • spark.components.supportClasses.SkinnableComponent (extends UIComponent): Adds functionality for Spark components whose visual appearance is defined in a separate skin class associated at runtime with the main component class. All out-of-the-box Spark components are based on SkinnableComponent.
  • spark.components.Group (extends UIComponent): The main container class for use in Flex that manages MX and Spark content. The following section, Group In-Depth, describes the purpose and containment rules for this new class in more detail.
  • spark.components.MXMLComponent (extends Group): Adds functionality for components whose visual appearance is defined in MXML but that are not skinnable. Most user-defined components will derive from this class.
  • spark.components.SkinnableContainer (extends SkinnableComponent): Base class for all skinnable components that can have content items. Classes like Panel and List extend from this.
  • spark.skins.SparkSkin (extends Group): A container class for declarative skin elements.
The Spark SkinnableComponent, MXMLComponent, Group, and SkinnableContainer specifications provide more details about the inner workings of each new base class. The MXMLComponent specification will be posted shortly.

Graphics in MXML

New in Spark is a graphics tag library, called FXG (formerly referred to publicly as MXML-G), which includes graphic and text primitives, grouping structures, and basic transformation capabilities. FXG includes tags for creating basic shapes like Rect, Ellipse, and Path; tags and attributes for filling and stroking those shapes with different colors, gradients, or bitmaps; as well as support for filters, masks, alpha, and blend modes on FXG elements. The FXG rendering model follows the Flash Player 10 rendering model very closely.
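For instance, a rounded rectangle with a gradient fill and a stroke can be described declaratively. The tag and attribute names below follow the graphic element classes listed in this document; exact namespaces and attribute spellings varied between early Gumbo builds, so treat this as a sketch:

```mxml
<Group>
    <content>
        <Rect x="0" y="0" width="200" height="50" radiusX="8" radiusY="8">
            <fill>
                <LinearGradient rotation="90">
                    <GradientEntry color="#FFFFFF"/>
                    <GradientEntry color="#D8D8D8"/>
                </LinearGradient>
            </fill>
            <stroke>
                <SolidColorStroke color="#888888" weight="1"/>
            </stroke>
        </Rect>
    </content>
</Group>
```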

The Flex implementation of FXG contains high-level graphical elements that all implement the IGraphicElement interface. Those graphical elements include BitmapGraphic, Ellipse, Line, Path, Rect, and TextGraphic. Graphical elements do not always have a display object backing them into which they draw. For performance reasons, a graphical element creates a display object only when it needs one (for example, when a transformation, filter, or mask is applied to it); otherwise it draws its content into a parent or sibling graphical element. APIs are provided to get the display object any graphical element is drawing into.

FXG as an Interchange Format

FXG also defines an interchange format for use in Thermo (a new designer-friendly Adobe tool for creating RIAs), and it matches the graphics capabilities described above. FXG is a defined subset of MXML, meaning it does not include MXML concepts like external class references, data binding, styles, event handlers, states, transitions, and more. In certain cases, FXG tags can be used within an MXML document. By choosing what kind of file (.fxg or .mxml) the FXG lives in, or which namespace to qualify the usage with, you can decide whether you want to use MXML features together with the graphics tags. When graphics tags are qualified with the FXG namespace, the MXML features described above are off-limits; when qualified with the MXML namespace, features like data binding and states can be used. Most skin classes in the Spark framework are MXML documents that primarily use FXG tags to describe the visual appearance of Spark components.
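The namespace distinction might look like the following. Both namespace URIs and the binding expression are hypothetical illustrations of the rule, not confirmed syntax from a shipped build:

```mxml
<!-- The same Rect in two namespaces. The FXG-qualified tag is restricted
     to the pure interchange subset; the MXML-qualified tag may use MXML
     features such as data binding. -->
<Group xmlns="http://ns.adobe.com/mxml/2009"
       xmlns:fxg="http://ns.adobe.com/fxg/2008">
    <content>
        <!-- Interchange subset only: no binding, states, or event handlers -->
        <fxg:Rect width="100" height="20"/>
        <!-- MXML namespace: binding is permitted -->
        <Rect width="{hostComponent.width}" height="20"/>
    </content>
</Group>
```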

For more information on the FXG interchange format, please refer to the FXG 2.0 specification.

Group In-Depth

Let's dive a little more into one of the new Spark base classes, Group. Groups contain content items. Content is an array of items that are completely owned and managed by the Group itself. Flex developers and designers work almost exclusively with a Group's content, and this is why content is the default property on Group. You may notice that the Skin class, which extends from Group, places all of its declaratively defined visual elements comprising the component skin within <content> tags. As you author Spark skins, you will do the same.
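A Group's content can therefore be declared directly in MXML. The sketch below mixes a Spark component with an FXG graphic element as sibling content items (tag names as discussed elsewhere in this document; namespaces omitted for brevity):

```mxml
<Group>
    <content>
        <!-- A skinnable Spark component as a content item -->
        <Button label="Click me"/>
        <!-- An FXG graphic element as a content item -->
        <Ellipse width="40" height="40">
            <fill>
                <SolidColor color="#3366CC"/>
            </fill>
        </Ellipse>
    </content>
</Group>
```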

A Group can support UIComponents, Flash DisplayObjects, IGraphicElements (the base interface class for FXG elements), and non-visual data as content items. As discussed above, for optimization purposes, an IGraphicElement is not necessarily always a DisplayObject. During validation, the Group analyzes its content to generate an internal list of children. You as a developer or designer do not have to deal with a Group's children, only its content. The generated children are all Flash DisplayObjects, added to the Group by the Group itself via traditional Flash APIs.

Generally speaking, a Group generates its children from its content items according to the following rules:

  • A content item that is a DisplayObject is inserted directly into the children list.
  • A content item that is an IGraphicElement and provides its own DisplayObject is inserted into the children list.
  • A content item that is a data object uses a generated item renderer to provide a DisplayObject which is inserted into the children list. You can use the itemRendererFunction property to conditionally decide which kind of renderer is generated for a given data object.
  • All other content items should be IGraphicElements that don't provide their own DisplayObject (IGraphicElements are optimized into the smallest set of DisplayObjects possible, so it's not necessarily the case that every IGraphicElement has a corresponding DisplayObject). The Group will create one Shape for each contiguous set of these in its content, and insert that Shape into its children list. The IGraphicElements will all render into this Shape.

As for layout, Groups do not have predefined layout. A layout can be associated with a Group declaratively and modified at runtime. A Group with no layout defined will default to a simple absolute position-based layout. Every content item can fully participate in a Group's layout. Layout is discussed further in the Spark Layout section below.

For more information regarding Spark Groups, please check out the Group and DataGroup specifications.

Spark Layout

Layout in the Spark model aims to correct some shortcomings that exist in the MX model. One of the biggest limitations in MX is that layout is tightly coupled with individual controls. For example, an MX List is vertical in nature and you must use a completely different control, HorizontalList, to arrange your data items in a horizontal fashion. In order to address this, in Spark, layout has been decoupled from individual controls and can be defined declaratively and modified or replaced at runtime. In the Spark model, layouts are defined independent of the class. While containment is managed by the Group class, the layout of the Group's children is delegated to an associated layout object. This allows higher-level derived classes like List, Panel, Application, TabBar, etc., to leave their layout up to the component developer rather than making arbitrary decisions as to what layout that component desires. Components will use a Group to manage containment of their content, and pass down the layout defined by the component developer to the internal Group for use in laying out its content.
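Declaratively, associating a layout with a Group might look like the following sketch. The layout tag and the gap property are assumptions modeled on the planned layout classes named later in this document:

```mxml
<!-- Swapping VerticalLayout for HorizontalLayout (or assigning
     group.layout at runtime) rearranges the same children without
     changing the container class, unlike MX's List/HorizontalList split. -->
<Group>
    <layout>
        <VerticalLayout gap="6"/>
    </layout>
    <content>
        <Button label="One"/>
        <Button label="Two"/>
    </content>
</Group>
```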

Additionally, the MX layout model has other deficiencies we aim to address in Spark:

  • Respecting transformations: While an MX container generally respects the scale of a component defined by the user, the rotation of the component is completely ignored. Any transformation set explicitly on a component (through the component's matrix property, available on every DisplayObject) is likewise ignored.
  • Controlling transformations: MX layouts allow the parent to control a component's final size and position, but do not generally give the parent the ability to dictate other parts of the component's transformation (specifically rotation, scale, and possibly 3D).
  • Role of the origin: MX layouts assume that the significant coordinates for a component are the bounds of the component as defined by the origin. Specifically, the MX layout system does not take into account components with content in negative coordinates, nor does it adjust based on whether the component's content is aligned to the origin or not.

In Spark, we plan to correct these problems, and more. First off, Spark layouts can operate on various types of objects, including IGraphicElements and UIComponents (from here on referred to by the general term layout content). Rather than having layout content provide only their width and height, they can provide their measured bounds in their own coordinate system. This gives Spark layouts visibility into whether layout content exists in negative coordinates or is offset from the origin.

Second, in addition to users specifying explicit widths and heights, users can provide explicit values for full transformations (including scaling, rotation, and translation) on layout content. Spark layouts have access to this information and will use it when determining how to lay out their content. Layout classes should respect explicit transformations assigned by users.

Third, the concept of content bounds will be solidified. Layouts that modify the size and position of layout content will do so based on the content's bounds (as opposed to the content's origin, width, and height, as in the MX model). Similarly, since Spark layouts can lay out many different types of objects, IGraphicElements (FXG content) will correctly include strokes in their reported bounds to ensure proper layout fidelity.

And finally in Spark, while a parent container can specify size and transform on its layout content, individual components themselves will get the last word over their final layout. This allows for additional features like right-to-left layout, relative layout offsets and more.

The guts of the new Spark layout mechanism are covered in the Layout specification, which fleshes out a new interface, ILayoutItem, that abstracts the different objects needing to be sized and arranged from the actual layout class. The new layout classes have yet to be documented in separate specifications, but currently we are planning a lightweight Spark Basic Layout class as well as Spark Vertical Layout, Spark Horizontal Layout, and Spark Tile Layout classes.

New Component Lifecycle

As described in the high-level overview section at the start of this document, the code comprising a Spark component falls into two basic categories: main component code and skin code. This clear separation of duties makes it easier to predict where custom code should be added when you need new interactive behavior or visual customizations in a Spark component. In MX, this distinction was often blurry, since code governing visual behavior frequently overlapped with interactive behavior and vice versa.

Resolving Component Skins

The first thing to understand is how a component associates itself with its correct skin class. This is currently done through CSS. A SkinnableComponent defines a skinClass style, which subclasses set to a skin class value in order to resolve their correct skins. Remember, a SkinnableComponent, or any subclass of it, is a component that can have its skin swapped at runtime. In Spark builds, you will notice a defaults.css file that has entries like:

Button
{
    skinClass: ClassReference("spark.skins.default.ButtonSkin")
}

or

Panel
{
    skinClass: ClassReference("spark.skins.default.PanelSkin")
}

These type selectors set the skinClass style to the correct skin class for every Spark skinnable component.
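Because skinClass is a style, an individual instance can override the framework default. The skin class named below is a hypothetical custom skin, not one of the framework defaults:

```mxml
<!-- Point one Button instance at a custom skin; all other Buttons keep
     the ButtonSkin assigned by the defaults.css type selector. -->
<Button label="Save" skinClass="com.example.skins.MyButtonSkin"/>
```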

Skin Parts

Next to understand is the relationship between a component and its skin, which expresses the sub-parts of a component. The term parts is often used to refer to the constituent elements that make up a whole component. For example, a classic scroll bar usually has four parts: an up button, a down button, a scroll track, and a scroll thumb. The parameterization of a part occurs through the new [SkinPart] metadata, which is used to define required or optional parts of the component. These parts are declared in the component class, while their visual appearance and implementation are defined in the skin class. When the skin is loaded at runtime, the skin parts are resolved and assigned to properties in the component.

Parts can be marked as required or optional by setting attributes on the [SkinPart] metadata. You'll notice that the Spark ScrollBar component has two required parts, thumb and track, while upButton and downButton are optional parts. Component parts can be of any type; for example, the ScrollBar upButton and downButton parts are typed as Button.
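In a component class, the part declarations might look like the following sketch of a ScrollBar-like component (the attribute syntax mirrors the required/optional distinction described above; exact metadata attribute names are assumptions):

```actionscript
// Hypothetical excerpt from a ScrollBar-like component class.
// The skin-loading machinery resolves each part in the skin and
// assigns it to the matching property here.

[SkinPart(required="true")]
public var thumb:Button;      // required: every skin must supply a thumb

[SkinPart(required="true")]
public var track:Button;      // required: every skin must supply a track

[SkinPart(required="false")]
public var upButton:Button;   // optional: a skin may omit this part
```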

Skin States

Now let's discuss component states and their relationship with skin classes. Components can enter and exit different states and component developers can add new states as they see fit. A component summarizes its current state by setting the currentState property, which exists on every skinnable and non-skinnable component. When a component state changes, it is often the case that the skin will change to reflect this state change. This introduces the concept of skin states, which are declared in the skin class and used to identify different states the component can assume visually. Designers can use MXML to define how the visual appearance changes as the skin's state changes. A component manages its state internally, and when a component state change occurs, the component puts the skin in the correct state in order to cause the component to update visually.
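A skin class declaring the states its component expects might look like the sketch below. The state names match the Button example above; the states syntax shown is indicative of the Enhanced States Syntax proposal, not a final API:

```mxml
<!-- Hypothetical button skin: the component drives currentState, and
     the skin describes how its visuals differ per state via
     state-specific attribute overrides. -->
<Skin xmlns="http://ns.adobe.com/mxml/2009">
    <states>
        <State name="up"/>
        <State name="over"/>
        <State name="down"/>
        <State name="disabled"/>
    </states>
    <content>
        <Rect width="100%" height="100%">
            <fill>
                <!-- Base color in "up", overridden per state -->
                <SolidColor color="#CCCCCC"
                            color.over="#E0E0E0"
                            color.down="#AAAAAA"/>
            </fill>
        </Rect>
    </content>
</Skin>
```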

Parts and states, as well as data and component properties, are the main contracts by which a component associates itself with, and communicates with, its skin.

Spark Component Lifecycle Methods

Now that we understand parts and states, let's discuss in more detail the key component lifecycle methods in a Spark component class. You'll notice that the MX component lifecycle methods like commitProperties(), measure() and updateDisplayList() remain and are used in the component classes. These methods serve the same purpose they did in MX: commitProperties() carries out the application of newly set properties, measure() determines component size, and updateDisplayList() positions sub-components and draws them to the screen. Skinnable Spark components typically delegate their measurement and rendering to the skin, but they can add view-specific code to these methods as appropriate. In addition to these methods, new methods have been added to the Spark component lifecycle to manage skins and parts. Where appropriate, these new methods use the same invalidation mechanism that existed in MX, in order to batch updates in a well-performing manner.

Loading Skins

Let's look at the component lifecycle methods added to support the new Spark model, starting with the most important pair of skin-managing methods: SkinnableComponent.attachSkin() and SkinnableComponent.detachSkin(). These methods manage the loading and unloading of skins and the re-assignment of skin elements when a style change affects a component skin. Both methods get called in the commitProperties() phase of the component lifecycle and are typically not invoked directly in user code. detachSkin() removes references to skin parts and removes event listeners acting on the component. attachSkin() loads the skin class for a component based on the skinClass style, finds the skin parts in the skin class and assigns them to properties in the component, and adds event listeners to the component. attachSkin() and detachSkin() are not typically overridden when writing Spark components.

Managing Skin Parts

Related to the setting and unsetting of skin parts are two important methods: SkinnableComponent.partAdded() and SkinnableComponent.partRemoved(). Whenever a skin part is added/assigned or removed/un-assigned, partAdded() and partRemoved() are invoked to handle the setup and teardown of skin parts. partAdded()'s responsibilities include attaching event listeners to newly instantiated parts, as well as pushing data or properties down to the skin part. Similarly, partRemoved() takes care of removing event listeners acting on the removed part, as well as any other cleanup required when a part is removed.
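A typical override pair might look like the following sketch, continuing the hypothetical scroll bar example above (the click handler name is an assumption):

```actionscript
// Hypothetical component code: wiring up the optional "upButton" part.
// The framework invokes these as skin parts are assigned/un-assigned.

override protected function partAdded(partName:String, instance:Object):void
{
    super.partAdded(partName, instance);
    if (instance == upButton)
    {
        // Setup: listen for interaction on the newly assigned part
        upButton.addEventListener(MouseEvent.CLICK, upButton_clickHandler);
    }
}

override protected function partRemoved(partName:String, instance:Object):void
{
    if (instance == upButton)
    {
        // Teardown: mirror the setup done in partAdded()
        upButton.removeEventListener(MouseEvent.CLICK, upButton_clickHandler);
    }
    super.partRemoved(partName, instance);
}
```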

Managing Skin States

Next is a pair of important methods that manage skin states. When the component state changes, it is the component's responsibility to set the appropriate skin state. This happens through the SkinnableComponent.validateSkinChange() and SkinnableComponent.invalidateSkinState() methods. Just as in the MX model you would never call commitProperties() directly, but instead call invalidateProperties() to force the component through a validation phase, you never call validateSkinChange() directly. Instead, when the component state changes, the component calls invalidateSkinState(), which eventually invokes validateSkinChange() in order to set the new state of the skin. In addition to these two methods is another important skin-related method: SkinnableComponent.getCurrentSkinState(), which returns the name of the state the skin should enter based on current interaction. This method is the main extension point when adding new states to a skin, in addition to declaring the skin state in the skin class.

The reason skin state management is factored out into this set of methods is ease of extension. It's common for developers to want to extend existing components and add new behavior, and as part of that they may want to add additional states. If the code that sets the currentState were scattered across the component, it would be difficult to extend the logic that determines what state the component is in at any given moment. By centralizing all the code that sets the current state in a single location, a developer can override and extend this logic in one place to meet their needs.
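That extension point might be used like this. The subclass, its emphasized property, and the corresponding skin state are all hypothetical:

```actionscript
// Hypothetical Button subclass adding an "emphasized" skin state.
// getCurrentSkinState() is the single place where state-selection
// logic lives, so one override is enough to extend it.

override protected function getCurrentSkinState():String
{
    if (emphasized)
        return "emphasized";

    // Otherwise defer to the base logic (up/over/down/disabled)
    return super.getCurrentSkinState();
}
```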

Advanced Animations in Spark

The effects infrastructure in Spark will be modified to address certain limitations that exist in the MX model. An MX IUIComponent contains a lot of effects-specific knowledge, specifically helper methods that are called directly by effects classes. This meant that in MX, an effect expected an IUIComponent as its animation target. In Spark, we want applications to be able to apply effects to any kind of component, as well as to non-component objects like the new graphical elements. We are looking to write new effects and infrastructure that can operate generically on any target object.

Additionally, we are investigating advanced functionalities like the ability to blend effects that animate the same target element or overlap in some fashion. Imagine an enter/exit effect that animates the same view. In the MX model, the enter effect would be interrupted in order to play the exit effect. In the Spark model, we're looking to provide more intelligent effects where effects can detect that there is some overlap and blend together to provide a smooth transition. Other areas of investigation include animation of arbitrary types (for example, colors) and enhanced functionality in the underlying Tween timing engine.

The effects features can be read about in more detail in the Flex 4 Feature Specifications section.

Advanced Text Functionality

The Spark components will leverage the new text features available in Flash Player 10 (codename Astro). This provides Flex with an entirely new text engine that gives us significantly more control over text elements, including richer control over text appearance, an object-oriented text model, a richer markup language, and support for right-to-left text.


Related

Wiki: Dropping the Fx Prefix
Wiki: Enhanced States Syntax
Wiki: FXG 2.0 Specification
Wiki: Flex 4
Wiki: Gumbo
Wiki: Spark Basic Layout
Wiki: Spark DataGroup
Wiki: Spark Group
Wiki: Spark Layout
Wiki: Spark SkinnableContainer
Wiki: Spark Skinning
