Glossary of Terms
Glossary terms are listed alphabetically below.
A
Alpha Channel
Attenuation
Attribute
B
Banding
Blend Colors Node
Bump Depth
Bump Value
C
Clamp Node
Color Remapping
Condition Node
Contrast
Contrast node
Cosine
Crater Attributes
Cross Product
D
Discretized
Distance Between Node
Dot Product
E
Expression
F
Facing Ratio
Filter Size
G
Gain/Offset
H
Hue
HSV
HSV to RGB node
L
Light Direction
M
Material
Matrix
N
Node
NormalCamera
O
Out Color
Out Alpha
Out Normal
P
Parameterization
Perturb
Point Being Shaded
Point Constraint
R
Ray Direction
RGB
RGB to HSV node
S
Sample Distance
Sampler Info Node
Sampling
Saturation
Set Range Node
Shader
Shading Group
Shading Map
Shadow Color
Sourcing
Surface Luminance Node
Surface Normals
Surface Shader
T
Texture
Texture Space
Time Node
U
UV Space
V
Value
Variables
Vector
Vector Product Node
W
WorldMatrix and WorldInverseMatrix
World Space Coordinates
Alpha Channel

The alpha channel, sometimes called a Mask or Matte channel, contains information about the opacity of objects in an image. The alpha channel is a gray scale representation of how much opacity the image has. Areas of the image intended to be fully transparent are black in the alpha channel.

Opaque regions of the image are white in the alpha channel. Semi-transparent portions of the image are represented by shades of gray in the alpha channel. The alpha channel stores a single floating point value per pixel. Note that opacity and transparency are inverse quantities: an opacity of 1 corresponds to a transparency of 0.

See Also: Out Alpha, Out Color


Attenuation

Attenuation is the gradual modification, typically a falloff, of a value over a range (for example, light intensity diminishing with distance).


Attribute

In Maya, attributes establish the characteristics of a node. They are parameters or controls which affect how a node computes and thus affect the final output from that node. For example, the Sections attribute on a sphere controls how many sections the sphere will be comprised of.

Attributes are like variables for a node. They provide input and output values that can be communicated to other nodes through connections. Like variables, attributes hold values that can be modified. Attributes can be used as variables inside expressions and MEL scripts. This is a good way to write scripts and expressions because the attributes are exposed in the interface for the user to manipulate.

Attributes, like variables, have a type such as float and vector. When you connect one attribute to another you need to make sure they are compatible attribute types. Maya will only allow you to connect attributes of the same type.

You can create your own attributes. These are often called custom or dynamic attributes. A dynamic attribute is an attribute that you can add or delete. A static attribute is an attribute that Maya reserves and does not allow you to delete. You create a new attribute for the selected node under the Modify -> Add Attribute menu. You can delete attributes by RMB selecting an attribute in the Channel Box and selecting Delete Attribute.

You can also manipulate attributes in MEL. The addAttr, delAttr, setAttr, listAttr, and getAttr commands are used to control attributes in scripts and expressions.
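For example, a minimal MEL sketch (the node name pSphere1 and the attribute name bounciness are assumptions for illustration):

    // add a custom float attribute, set it, then read it back
    addAttr -longName "bounciness" -attributeType "double" -minValue 0 -maxValue 1 -defaultValue 0.5 pSphere1;
    setAttr pSphere1.bounciness 0.8;
    getAttr pSphere1.bounciness;   // Result: 0.8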

See also Node and Variable


Banding

A rendering or display artifact generally attributed to insufficient color resolution, but also used to describe rings, stripes, or areas of stepped intensity in image components. Related to pixelation and other artifacts that result from too small a range of allowable values.


Blend Colors Node

The Blend Colors node is a rendering utility node that lets you take two textures (or other inputs), mix them together based on a percentage you define, and output the resulting values. The Blend Colors node has inputs called Color1 and Color2 that you connect your two textures (or other desired inputs) to. Adjusting the Blender attribute then mixes the two textures together. The Blender attribute has a range of 0 to 1. As Blender increases, you see more of Color1 and less of Color2. As Blender decreases, you see more of Color2 and less of Color1. It can be useful to texture map or attach an expression to the Blender attribute.
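As a rough MEL sketch, the network might be wired together as follows (the texture names checker1 and fractal1, and the material lambert1, are assumed to already exist):

    shadingNode -asUtility blendColors -n "myBlend";
    connectAttr checker1.outColor myBlend.color1;
    connectAttr fractal1.outColor myBlend.color2;
    setAttr myBlend.blender 0.5;                  // an equal mix of the two textures
    connectAttr myBlend.output lambert1.color;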


Bump Depth

The Bump Depth attribute on the bump2d and bump3d nodes is a multiplier on the Bump Value. It can be used to increase or decrease the overall intensity of the bump effect. See also Bump Mapping, Bump Value, normalCamera and outNormal.

The valid range of Bump Depth is open to positive and negative numbers, but values between 0 and 1 typically return the best results. Using a negative value such as -1.0 reverses the direction of the bump effect, which can be useful at times.


Bump Value

The bump node uses the Bump Value to calculate a normal offset and adds it to its normalCamera input to get the output normal.

See also Bump Mapping, Bump Depth, normalCamera, outNormal.


Clamp Node

The clamp node takes input values and limits the outputs to a range of values defined by the user by setting the Min and Max attributes. Input values above and below the specified range of allowable outputs will be "clamped". This means that values less than the Min attribute will be output as the Min value and values greater than the Max attribute will be output as the Max value.
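A quick MEL sketch of the behavior (the node name and values are only for illustration):

    shadingNode -asUtility clamp -n "myClamp";
    setAttr myClamp.min 0.2 0.2 0.2;
    setAttr myClamp.max 0.8 0.8 0.8;
    setAttr myClamp.input 1.5 0.5 -0.3;
    getAttr myClamp.output;   // Result: 0.8 0.5 0.2 (values outside 0.2-0.8 are clamped)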

See also SetRange Node.


Color Remapping

Color Remapping is the process of replacing one set of colors with another by passing a single out value (often the V parameter from an HSV node) to the V Coordinate of a ramp. The color output from the ramp is the color found at the V Coordinate dictated by the input value.

In the illustration below, the outColor from a File Texture node is connected to the inRgb of an RGB To HSV node. The RGB To HSV node then converts the RGB values from the file texture into separate Hue, Saturation, and Value outputs.

The outHsvV (or Value parameter) from the RGB To HSV node is connected to the vCoord (or V Coordinate) of the ramp. The ramp uses this value to find the color at the matching V Coordinate value. For example, if the RGB To HSV node passed a value of 0.7 to the ramp, the ramp would find the color at position 0.7 (in this case bluish green), and then output that color.

The outColor from the ramp is then connected to the color parameter of the Lambert material node.

As a result the original colors from the File texture node are replaced with corresponding colors from the ramp.

The color remapping process is often used to turn color images into grayscale images by using a grayscale ramp.
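In MEL, the connections described above could be made roughly like this (the node names file1, rgbToHsv1, ramp1 and lambert1 are assumed to already exist, and the ramp is assumed to have nothing else connected to its UV coordinates):

    connectAttr file1.outColor rgbToHsv1.inRgb;
    connectAttr rgbToHsv1.outHsvV ramp1.vCoord;
    connectAttr ramp1.outColor lambert1.color;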


Condition Node

Condition nodes compare two terms according to a user-definable operation such as "Is equal to" or "Is greater than", and then output one of two possible compound attributes, Color1 or Color2, accordingly. Note that because Color1 and Color2 are 3-way compound attributes that can be mapped, they can be used to output any 3-way compound attribute such as the Translation, Rotation or Scale of a transform node, or the RGB values of a texture node.

A Condition node requires one logical operator and two choice values, as shown in the following pseudocode:

    if (A operation B)
        outColor = Color1;
    else
        outColor = Color2;

The Condition node compares the value in A with the value in B to find out whether the value in A is equal to, not equal to, greater than, greater than or equal to, less than, or less than or equal to the value in B. If the condition evaluates as True, Color1 is output; if the condition evaluates as False, Color2 is output.
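A hedged MEL sketch of such a setup (node names are examples; on the node itself the two colors are the colorIfTrue and colorIfFalse attributes, and the operation index for "Greater Than" is assumed here to be 2):

    shadingNode -asUtility condition -n "myCondition";
    setAttr myCondition.operation 2;                        // assumed: 2 = Greater Than
    connectAttr pSphere1.translateY myCondition.firstTerm;
    setAttr myCondition.secondTerm 5;
    setAttr myCondition.colorIfTrue 1 0 0;                  // red when translateY > 5
    setAttr myCondition.colorIfFalse 0 0 1;                 // blue otherwise
    connectAttr myCondition.outColor lambert1.color;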


Contrast

A measure of the difference in brightness between the light and dark areas of an image. An image with high contrast has a large variation between light and dark values. For example, two pixels next to each other, one shaded black and the other white, would have high contrast. Changing the contrast of an image affects the relationship between dark values and light values in the image.

See Also: Contrast node


Contrast node

Contrast nodes are used to increase or decrease the contrast in a texture. The contrast of each of the R, G, and B channels can be controlled individually. When Contrast is increased, light colors become lighter and dark colors become darker. When contrast is decreased, all the colors of the texture are brought closer to the middle range. The Bias attribute controls the middle adjustment of the Contrast control. Increasing Bias will make more of the image dark as Contrast is increased, while decreasing Bias will make more of the image bright as Contrast is increased.

See Also Contrast


Cosine (and other math functions)

When you use any of the math functions in Maya (sin, cos, gauss, noise, etc.) you generally provide a value to that function and get back a result value from that function (like a calculator). For example the following could be an expression on an object:

    ball.ty = cos (time);

Maya works more like a graphing calculator, though, since you can take the resulting value and control just about any attribute in the software. Sin and cos are commonly used since they provide output values that oscillate between -1 and 1 by default when a continuously increasing value is supplied as input.

For a list of other math functions available in Maya, check the Insert Functions menu inside Windows -> Animation Editors -> Expression Editor

See Also Expression, Time, Nodes


Crater Attributes

The Crater attributes Shaker and Melt control the edge quality and dispersal of perturbed normals or "craters". The Normal attributes Depth, Melt, Balance, and Frequency control the magnitude, edge quality, and detail level of the perturbed normals.

See also outNormal, Bump Depth, Perturbed and normalCamera.


Cross Product: See Vector Product Node


Discretized (Made Discrete)

To make distinct or separate entities out of a whole. In the context of building packets of pixels, a truncation or stepping function is used to remove the decimal component, resulting in whole steps or chunks of values across the surface. For example, truncating the decimal portion from the sequence (1.1, 1.5, 1.9, 2.1, 2.5, 2.9) creates the discretized sequence (1, 1, 1, 2, 2, 2).


Distance Between Node

Use the distanceBetween node to calculate the distance between two points in space. Using this special node is much easier than using an expression or several utility nodes to find distances in 3D space. Given two points (x1, y1, z1) and (x2, y2, z2), the Distance Between node performs the following calculation:

    distance = sqrt((x2 - x1)^2 + (y2 - y1)^2 + (z2 - z1)^2)

To create a Distance Between node, you need to use the MEL command:

    createNode distanceBetween;

This node will not appear in Hypershade. Open Hypergraph or the Outliner and MMB drag the new node into the Hypershade Work Area to add it to your network. See also the function mag in the Maya documentation.
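A quick way to verify the node's behavior in the Script Editor (the values are chosen to form a 3-4-5 triangle; the node name is an example):

    createNode distanceBetween -n "myDistance";
    setAttr myDistance.point1 0 0 0;
    setAttr myDistance.point2 3 4 0;
    getAttr myDistance.distance;   // Result: 5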


Dot Product: See Vector Product Node


Expression

Expressions are instructions you type to control attributes over time. An attribute is a characteristic of an object, for instance scaleX, translateY, colorR and so on. Expressions are stored and evaluated by Maya in an expression node. Expression nodes are commonly created automatically when you create an expression using Windows -> Animation Editors -> Expression Editor; however, there are other ways to make expressions.

Although expressions are written using MEL syntax, they differ from MEL scripts in that they are saved as part of the Maya scene file (as opposed to on disk or in memory). Furthermore, expressions evaluate on each frame of the animation playback and also in cases when the attributes controlled by the expression are altered.

For example

    ball.translateX = cone.translateY + 2;

is a simple expression that drives the X translation of a ball object in the scene based on a cone's Y translation. Expressions provide a method for the user to establish and modify relationships between objects, gather data about objects in the scene, and alter how Maya evaluates various pieces of data throughout the scene over time or during render time.


Facing Ratio

Facing Ratio is a value between 0 and 1 based on the angle at which a point being shaded faces the camera. If the point is directly facing the camera, that is to say the surface normal at the point being shaded is pointing directly at the camera, then the Facing Ratio value is 1. If the surface normal is perpendicular to the camera, then the Facing Ratio value is 0.
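One common use, sketched here in MEL, is to drive a ramp's V coordinate with the facing ratio so a surface changes appearance as it turns edge-on to the camera (the ramp and material names are assumed, and the ramp is assumed to have nothing else connected to its UV coordinates):

    shadingNode -asUtility samplerInfo -n "mySamplerInfo";
    connectAttr mySamplerInfo.facingRatio ramp1.vCoord;
    connectAttr ramp1.outColor lambert1.color;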

Filter Size

Filter size generally refers to the kernel size of a filter, or the number of neighboring pixels which influence the current pixel's value. The term 'filter' comes from signal processing: a low-pass filter makes an image look fuzzy because it removes high frequencies (i.e. sharp transitions at the edges of objects), while a high-pass filter increases the high-frequency content and hence the contrast at the edges of objects. For depth map shadow filtering, a higher filter size value will result in softer shadows but longer computation times. 32 is the maximum filter size available to dmap shadows.

See also DMap.


Gain/Offset

Gain and Offset (usually associated with Alpha Gain and Alpha Offset or Color Gain and Color Offset) refer to the manipulation of values output from a node.

Gain is the scaling factor applied to the texture's out value, and Offset is the offset factor. That is to say that Gain multiplies the texture's out value, while Offset adds or subtracts from it.
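Conceptually, using the attribute names as they appear on a file texture node:

    // the alpha output after Gain and Offset are applied
    outAlpha = (textureAlpha * alphaGain) + alphaOffset;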

The images below show a series of surfaces that are being displaced by the outAlpha value of a Circular ramp. As the AlphaGain and Alpha Offset values are changed the amount that the surface is displaced, as well as the way that the surface is displaced, is adjusted. The texture used to map the displacement was also used to map the color. So areas with the brightest colors are subject to the greatest displacement.

Note that changing the AlphaOffset value displaces the entire surface. This is because the Offset value adds (or subtracts) from the total alpha value (so even an area with an Alpha value of zero associated with it will be displaced by the amount of the AlphaOffset value). As a result the surface is displaced as a whole.

Changing the Gain value displaces the surface only where the alpha value is not zero. This is because Gain multiplies the existing alpha value by the AlphaGain value (so an area with an Alpha value of zero associated with it will not be displaced at all since anything multiplied by zero is still zero).

Negative values can also be used. Compare the two images below. In the image on the left, both Alpha Gain and Alpha Offset use positive values; in the image on the right, a negative Alpha Gain value is used, so while the surface is still displaced upwards as a whole (because of the positive Alpha Offset value), the details are displaced downwards (because of the negative Alpha Gain value).

Color Gain and Color Offset behave identically (mathematically speaking) to Alpha Gain and Alpha Offset, but they are generally used to adjust the Color Balance of a texture by scaling or adding to the texture's rgb values. By adjusting Color Gain or Color Offset along a gray scale the texture's contrast and brightness can be adjusted, while using a color such as red or green will give the texture a new tint.

A common misconception is that Color Gain is used to darken textures while Color Offset brightens them. While this is how the attributes are often used, it is not strictly what they are for. It is more correct to say that Color Gain and Color Offset are adjusting contrast and brightness.

Color Gain is in fact scaling the color values, and Color Offset is adding (or subtracting) from them.

By default, Color Gain is set to white, so all colors are multiplied by 1 (which results in no change) and Color Offset is set to black, so all colors have 0 added to them (which also results in no change).

Using Color Gain and Color Offset together can wash out or accentuate the contrast of a texture. Note that values outside of the 0 to 1 range can be used to great effect.

As stated above, Color Gain and Color Offset can also be used to adjust the tint of a texture. Mathematically speaking, Color Gain and Color Offset work the same way on color values as Alpha Gain and Alpha Offset do on alpha values; the scaling and offsetting is simply applied to each of the RGB channels.

Note that in the examples above, the Color Gain and Color Offset values have been adjusted using the V (Value) attribute in the Color Editor window which adjusts the color's rgb values uniformly, allowing the color to be changed from white to black along a gray scale. Adjusting the Color Gain and Color Offset's RGB values individually will change the tint of the texture.


Hue: See HSV


HSV

HSV, or Hue, Saturation, and Value, is one mode for producing and editing colors in Maya (the other being RGB). Hue, simply put, refers to the color found on a standard color wheel, measured from 0 to 360. Saturation refers to the intensity of the color. Measured from 0 to 1, it is the amount of gray mixed into the color; the saturation value determines how pure, or how diluted, the color will be. Value refers to the overall brightness of the color. Measured from 0 to 1, it determines how bright, or how dark, the color will be. HSV color mode makes editing colors easy because it deals with the hue, intensity, and brightness of a color separately. As a result, it is easy to achieve the exact shade of color desired. While color editing is possible in RGB mode, it is generally more difficult to fine-tune colors because all three sliders (Red, Green, and Blue) need to be adjusted to change the intensity or brightness of a color.


HSV to RGB node

Converts an HSV (Hue-Saturation-Value) color into an RGB (Red-Green-Blue) color. This is useful when you want to build or adjust a color using HSV values and then connect the result to an attribute that expects RGB values.

See also: RGB to HSV node, HSV


Light Direction

The Light Direction is one of the computed outputs from the light and is accessed from the Light Data section of the light's outputs in the Connection Editor. By connecting this output to another node, you are asking Maya to provide the direction from the surface to the light at each point being shaded at render time. This can be useful information to use when augmenting surface shading or a surface's reaction to lighting.

See also LightInfo Node.


Material

A Material node itself is simply a shading model: an algorithm that simulates the physics of how light behaves when it strikes a surface. Because a Material is a separate node from the Shading Group in Maya, it can be used in the same way as a texture or utility node within a shading network. The main materials are Lambert, Blinn, Phong, PhongE, and Anisotropic.

See also Texture, Shader and Shading Group.


Matrix

In Maya, a matrix is a collection of values that can be stored and accessed as one single entity. Think of a matrix as a grid of data. Each grid space has an address associated with it that can have data written into it or read from it. For example, you might create a matrix to store a list of data with two rows and three columns (similar to a spreadsheet). You can then access or assign information into the entries of that spreadsheet using MEL.

Here's the generalized syntax form for a matrix:

    dataType $variableName[rows][columns] = <<data separated by commas and semicolons>>;

Here's a specific example of declaring and assigning a matrix with 2 rows and 4 columns.

    matrix $myMatrix[2][4] = <<1, 2, 3, 4; 5, 6, 7, 8>>;

Here's an example showing how to get data out of the above matrix and store it somewhere else. Note that indexing of a matrix begins at position 0, just like in most programming languages, so valid values for the first bracket index of $myMatrix are 0 and 1, and for the second bracket index they are 0, 1, 2, and 3.

    float $myValue = $myMatrix[0][2];
    print $myValue;
    //Result: 3

When you create a matrix you explicitly define the number of rows and columns; you have to state exactly how many rows and columns you want in the brackets. When $myMatrix was created, $myMatrix[2][4] indicated 2 rows and 4 columns; $myMatrix[0][2] refers to the first row and third column of that matrix.

See Also: Variables, WorldMatrix, WorldInverseMatrix, and Vector


Node

A node in Maya consists of specific information and actions associated with that information. Each node can receive, hold, and provide information by means of attributes. A joint is a node, a sphereShape is a node, an expression is a node, a texture is a node, an animation curve is a node. Nodes commonly compute some function based on their inputs to produce their outputs.

A node's attributes can connect to the attributes of other nodes, thus forming a network of nodes. As you use Maya you are creating, deleting and manipulating nodes. When you playback your animation in Maya or render an image you are seeing the evaluation of these nodes in a specific order.

See also Attribute.


NormalCamera

NormalCamera is an attribute of the Sampler Info node. It provides the surface normal for a point being shaded in camera space, that is, relative to the camera. NormalCamera is useful in situations where the degree to which a surface (at the point being shaded) is facing the camera will affect some aspect of shading such as transparency or customized bump mapping.


Out Color

Out color is an output attribute that exists on many rendering nodes in Maya. For example, blinn1.outColor provides the final computed shading result of blinn1, including any contributions from upstream nodes; file1.outColor provides the color computed by a file texture. This attribute can be accessed and modified by the user during render time and passed on to other entities in Maya.

See Also: outAlpha, outNormal


Out Alpha

Out Alpha is an output attribute that exists on most rendering nodes in Maya. This attribute stores a numeric value that describes the alpha channel value at each point being shaded. This information can be accessed by the user and/or connected to other nodes in Maya. This attribute is most commonly used to control transparency, bump mapping and displacement mapping.

See Also: Alpha Channel, Point Being Shaded


Out Normal

Out Normal is an output attribute on a bump2D and bump3D node. It is the "perturbed normal" that is calculated based on the input texture connected to the bump node. Connect outNormal to the Normal Camera attribute of a material to put a bump map on that material. Or, connect this to the Normal Camera attribute of another bump map to chain them together.
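For example, in MEL (node names assumed):

    // put a bump map on a material
    connectAttr bump2d1.outNormal blinn1.normalCamera;
    // or chain a second bump node into the first
    connectAttr bump2d2.outNormal bump2d1.normalCamera;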

See Also: Perturb, Bump Mapping


Parameterization

A surface's parameterization is a set of 2D coordinates which cover a surface and ideally are unique for the surface. That is, for any given U,V index, there is one and only one point on the surface which has that parameterization. Because of the one-to-only-one point mapping, the parametric indices are commonly used to map 2D textures onto objects.

NURBS surfaces always have one-to-only-one point mapping for their parameters because the curves which NURBS are made from provide a consistently increasing parametric value. Polygons do not have a natural parameterization and can have a one-to-many point parametric mapping if their parametric mapping is not created carefully.

You can describe parametric coordinates as if a surface is flattened out onto a plane and placed on a grid with a coordinate system between 0 and 1 along the two axes of the grid, U and V. It is easy to visualize how each part of the surface should flatten out to only a single point on the flat plane. If more than one part of the surface covers the same place on this grid, then there is a fold in the flattened surface.

When people speak of a good or bad parameterization, they are referring to two qualities of the coordinates: 1) are the parametric coordinates smoothly and evenly distributed across the surface so that 2D texture maps do not bunch or stretch? 2) do the parametric coordinates have a one-to-only-one mapping with the surface so that any part of the 2D texture map only shows up once on the object?

A NURBS surface in Maya is defined by the set of curves used to establish the boundary of the surface and by isoparametric lines within the surface that establish its contour characteristics. The parameterization of the surface is derived from the parametric values of those curves.

See also UV Space and Texture Space.


Perturb

To alter a value. In the context of surface normals, a perturbed surface normal will have an altered direction and thus an altered appearance.

See also outNormal, normalCamera, Bump Depth and Bump Mapping.


Point Being Shaded

The term Point Being Shaded refers to the evaluation of a given pixel at render time. When Maya renders an image, the renderer scans the frame, pixel by pixel, firing a virtual ray from the pixel out into the scene. If that ray strikes something such as a surface or particle, it is evaluated according to the shading network associated with it. Properties such as color, reflectivity, transparency, etc. are all evaluated to determine the final color that pixel should be rendered as.


Point Constraint

A point constraint causes one object ("slave object") to match the position of another object ("master object") or the average position of several objects. Point constraining one object to another causes only the translation of that object to match the translation of the other object, scale and rotations are ignored.

To create a point constraint between two objects, first select the master object, then Shift-select the slave object, then choose Constrain -> Point. If there are multiple master objects, the weight attribute on the point constraint lets you control how tightly the slave matches each master object. This weighting can be animated to achieve effects such as picking up and dropping objects. Other constraint types in Maya include Orient, Aim, Scale, Geometry, Normal, Tangent and Pole Vector.
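In MEL, the same workflow looks roughly like this (object names are examples):

    // constrain "slave" to the averaged position of two master objects
    pointConstraint -weight 1 masterA masterB slave;

The per-target weight attributes created on the resulting constraint node can then be keyframed to shift the slave's position between the masters.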


Point Matrix: See Vector Product Node


Ray Direction

Ray Direction is an output attribute provided on the samplerInfo node. It is the vector from the camera to the point on the surface being shaded. The Ray Direction points at the surface while the outNormal points away from the surface toward the camera.

See also samplerInfo node, outNormal, and normalCamera.


RGB

RGB is one mode for editing colors in Maya (the other method being HSV).

Color is edited in RGB mode by adjusting the relative value of Red, Green, and Blue in the subject color. RGB values in Maya can be adjusted along a scale of 0 to 1, or 0 to 255. An equal mixture of Red, Green, and Blue will produce a neutral gray, with the actual value of each component color determining the brightness or darkness of the final color.


RGB to HSV node

Converts an RGB (Red-Green-Blue) color into an HSV (Hue-Saturation-Value) color. Some effects are easier to calculate using HSV values than RGB values. You can convert the output of any shading network into HSV values. RGB to HSV nodes are also commonly used in the Color Remapping process, by connecting the outHsvV value from the RGB to HSV node to the vCoord of a ramp.

See also: Color Remapping, HSV to RGB node.


Sample Distance

Sample Distance is an output attribute provided on the lightInfo node. Sample Distance is the distance from the center of the light to a point being shaded (rendered). The connected spot light passes its worldMatrix[0] to the worldMatrix on the lightInfo node. This gives the lightInfo node a position in world space from which to calculate its output attribute called Sample Distance.

See also World Matrix and LightInfo Node.


Sampler Info Node

When Maya renders an image the renderer scans the frame, pixel by pixel to determine what color the pixel should be. This process, referred to as "Sampling", checks to see if there is a surface in the pixel that needs to be shaded. If there is, lighting, shadowing, contouring, and material properties of the surface are all considered to evaluate the color for that pixel.

The Sampler Info node provides information about each point on a surface as it is being "sampled", or calculated, for rendering purposes. Sampler Info can give you information such as the shading point's position in space, its orientation and tangency, its location relative to the camera, and much more.


Sampling

When Maya renders an image, the renderer scans the frame, pixel by pixel, to determine what color each pixel should be. This process, referred to as "Sampling", checks to see if there is a surface in the pixel that needs to be shaded. If there is, lighting, shadowing, contouring, and material properties of the surface are all considered to evaluate the color for that pixel.

See also: Point Being Shaded.


Saturation: See HSV


Set Range Node

The setRange node allows you to take values in one range and map them to another range. To use this node, place the range of values that you want to remap into the oldMin and oldMax attributes, and put the new range of values into the min and max attributes. Connect the values you want to remap to the value attribute; the remapped result is then available on the outValue attribute.

For example if you want to map rotational values to a color you map a range of 0 to 360 to a range of 0 to 1:

    oldMin = 0
    oldMax = 360
    min = 0
    max = 1

outValue will then hold the converted result, depending on what the input rotation is.
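A minimal MEL sketch of this remapping (object and node names are assumptions):

    shadingNode -asUtility setRange -n "myRange";
    setAttr myRange.oldMin 0 0 0;
    setAttr myRange.oldMax 360 360 360;
    setAttr myRange.min 0 0 0;
    setAttr myRange.max 1 1 1;
    connectAttr pSphere1.rotateY myRange.valueX;
    connectAttr myRange.outValueX lambert1.colorR;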

See Also Clamp Node.


Shader

A shader is a general industry term for a system or network of shading elements that help define how an object looks and reacts to light. Shading network is a more accurate description in Maya because shaders in Maya consist of a network of nodes that each have specialized functions. Because materials can generally be used as nodes inside a shading network, they carry a separate label in Maya compared with other 3D rendering applications. The quality or look of a surface is generally attributed to the shader or shading network that is assigned to that surface. See also Material, Shading Group and Surface Shader.


Shading Group

A Shading Group node is used to assign the shader or shading network to geometry. In addition to geometric assignment, this node is also used to make assignments between the shading network and the volumetric and displacement methods of rendering.
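In MEL, creating a shading group, attaching a material to it, and assigning geometry might look like this (the material myBlinn and the object pSphere1 are assumed to already exist):

    sets -renderable true -noSurfaceShader true -empty -name "myBlinnSG";
    connectAttr myBlinn.outColor myBlinnSG.surfaceShader;
    sets -e -forceElement myBlinnSG pSphere1;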

See also Material and Shader.


Shading Map

Shading Maps are Material nodes that allow you to remap the output from one material node with the colors from another node, to create custom shading results. In short, Shading Maps allow you to map the shading of a material node. Shading maps are used to exercise precise control over the transition from the highlight to the shaded area of a surface. Shading Maps can be used to achieve cartoon-like shading effects by restricting the shading transition to bands of solid colors. Car designers also use Shading Maps to produce greater variation in shading transition to prevent a flat or monochromatic look in paint. They act similarly to a Set Range node, except that instead of only allowing a linear interpolation between the new minimum and maximum values, they allow stepped or non-linear interpolation.


Shadow Color

An attribute on a Maya light that provides control of the color of shadows cast by objects that are being illuminated by that light. The default color is black. Adjusting the shadow color can be useful when simulating shadows produced by transparent colored surfaces such as glass. Also, if you are using a colored light, adding a very slight tint of that light's color to the shadow color can help add realism to your renders.


Sourcing

Sourcing a MEL script file causes Maya to execute all of the MEL commands and declare all global procedures that are contained within that script file. If you modify a procedure (a collection of MEL commands) in a script file, Maya will not register those changes until that procedure gets refreshed or "sourced". This is because Maya keeps previously executed procedures in memory. When you source a script file, Maya rereads the procedures in that script file.

For example, if your script is called test.mel and you make changes to it in a text editor, you must execute the source command in the Script Editor, followed by the script name, before Maya will know about those changes:

    source test;


Surface Luminance Node

Surface Luminance is a Utility node that returns the luminance (brightness) of a point on a surface as it is being rendered. This luminance takes into account all the light sources shining on the object and the angle at which they shine on the object. It does not take into account the specular properties of the object itself, such as hotspots.

See also LightInfo Node.


Surface Normals

The direction, projected perpendicularly from any point on a surface, is referred to as the Surface Normal. The Surface Normal is only projected from one side of a surface.

On a NURBS surface, the side of the surface which will project the Surface Normal is determined by the U and V directions of the surface. This can be predicted by using the Right Hand Rule. Point your right thumb in the increasing U direction of the surface and your right forefinger in the increasing V direction of the surface. Pointing your middle finger perpendicular to the first two will then indicate the surface normal direction.

The direction perpendicular to the opposite side of the surface is known as the Flipped Normal.

Surface Normals for polygonal surfaces are associated with the individual faces. The side of the face that projects the normal is determined by the order of the poly vertices (poly vertex numbers can be displayed by opening the Display > Custom Polygon Display > Options window and turning on Show Item Numbers/Vertices), and can be determined by using the Poly Right Hand Rule. If you cup the fingers of your right hand, twist your wrist so that your fingers are cupped in the direction of the vertex order, and then extend your thumb, your thumb will be pointing in the direction of the face's Normal.


Surface Shader

The Surface Shader is different from the other material types in Maya because it is not a shading model. Instead, it is simply a pass through node which does not compute lighting or any mathematical combination of the input attributes. It is designed to convert arbitrarily named input attributes to attribute names that will be recognized by the shading engine. As a result, all of its shading properties are defined by the shading network that is plugged into it. When used without any form of shading model input, Surface Shaders render with no lights and look like self-illuminated surfaces such as LEDs, lava, or neon signs.

See also Shader, Shading Group and Material.


Texture

A texture is a pattern or image that is used as input or as a basis to augment a material's shading model. Textures are either derived from images (file textures) or procedurally computed (Ramp, Grid, Fractal, and so on). Textures have a relationship to the surface they are applied to through placement or projection nodes. In Maya there are several subsets of textures.

2D textures, which are applied across a surface based on the surface's parametric coordinates (NURBS) or arrangement of texture vertices (polygons). E.g. File textures, Ramps, Grids, Fractal2d...

3D textures, which exist in their own three-dimensional space as defined by the 3D texture placement node. This creates the appearance of textures that flow through an object, like veins in marble. E.g. Cloud, Marble, Wood...

Environmental textures, which simulate three-dimensional spaces using a series of image files. E.g. Env Ball, Env Cube, Env Sphere. Env Chrome and Env Sky are procedural textures that do not rely on file textures or images. These textures are almost exclusively used as reflection maps.

The Layered Texture is a node that lets you combine textures using standard compositing conventions such as add, subtract, multiply, over, in... etc.

See also Material, Texture Space, UV Space.


Texture Space

Texture space is the set of 2D coordinates, expressed in U and V directions within a given surface, that define how a texture will be applied to a geometric object. The U axis runs across the surface in one direction and the V axis runs perpendicular to U. Texture space can be described in coordinate fashion, such as the U coordinate and V coordinate. If the texture space is described within a range of 0 to 1, it is considered normalized UV or normalized texture space.

See also UV Space, Parameterization and Texture.


Time Node

When an animation is playing back, Maya keeps track of the flow of time during playback. This is handled by the time node. Every new scene in Maya has a time node in it. The output of the time node (outTime) is actually the frame number that Maya is currently evaluating during playback, so it is not a measurement of seconds unless you divide that output by the current frames per second setting you are using.

If desired, you can create your own time node and can connect different objects to different time nodes for interesting mixed timing effects. Time nodes are also used for dynamics evaluation and all expressions in Maya get connected to a time node by default so the expression is evaluated on each frame of an animation.
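For example, querying the time node in MEL (the default time node is named time1; a 24 fps playback rate is assumed for the conversion):

    getAttr time1.outTime;                           // the current frame number
    float $seconds = `getAttr time1.outTime` / 24.0;
    print $seconds;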

See Also: Node, WorldMatrix


UV Space

UV refers to the surface coordinate system used to describe a surface, or a range of values associated with a geometric surface. With NURBS geometry, a point on the surface can be described by the two values associated with the U and V directions, much like longitude and latitude are used on a map to describe a single point. Texturing also uses this coordinate system to align the mapping of the texture to the geometric surface.

See also parameterization, and Texture Space.


Value: See HSV


Variables

In the context of MEL or Expressions, a variable is a symbolic name that stands for a modifiable value that is held in memory. In MEL (Maya Embedded Language) variables begin with a dollar sign $. Variable names cannot include spaces or special characters. Variables cannot begin with a number but can contain underscores and numbers. Variable names are case sensitive.

Examples of Variables and variable types:

    int $myIntegerNumber = 6;

int is an integer type that can only hold non-decimal numeric values (...-2, -1, 0, 1, 2...)

    float $myFloatNumber = 6.0661;

float is a floating-point variable type that holds fractional numeric values (-456.54857, 2.01, 45.4539)

    string $myStringOfCharacters = "print this text";

string is a variable type that only holds character strings ("A string of letters", "abc")

    vector $myVectorValues = <<3.33, -4, 900.112>>;

vector is a variable type that carries three float values. This variable type is useful for holding 3 related values that can be referenced in one place such as color <<r,g,b>>, position <<x,y,z>>.

Array variables

    int $myIntArray[] = {5,3,11};

Arrays can be of type int, float, string, vector and are designated with the open and close brackets []. An array is a list of values that are indexed. To recall an individual member of the array use its index:

    $myIntArray[1];   // Result: 3

Array indices are zero based; that is, they start with 0 and count up.

A matrix is a two dimensional array. These are rarely used in MEL.

    matrix $myMatrixValues[2][3] = <<1,1,2;2,2,-1.01>>;

see also Vector, Matrix, and Attribute.


Vector

In Maya, a vector is a type of data consisting of 3 distinct numerical components. For example, the vector defined by:

    vector $hello = <<5, 7, 9>>;

is comprised of the three individual components 5, 7, 9. The first component is called the X component, the second the Y component and the third the Z component. So, in general:

    vector $variableName = <<x, y, z>>;

In Maya there are many quantities that lend themselves to this type of storage. For example, you may want to refer to the translation, rotation or scale of an object. Or, perhaps the RGB color or the velocity of a particle. Rather than always carrying 3 separate pieces of information around, you can organize that data easier by storing it as a vector. This vector is then a single entity that represents all 3 pieces of information. You can access each individual component in the following way:

    $firstComponent = $hello.x;
    $secondComponent = $hello.y;
    $thirdComponent = $hello.z;

In this example, a vector variable is shown. Attributes can also be created that are of type vector. The most common vector attributes are particle attributes such as position, acceleration, velocity, rgbPP, and many others. These per-particle attributes are actually vector arrays, since they hold a list of vectors (one value per particle).

See Also: Variables, Matrix


Vector Product Node

    Operations
    Dot Product
    Cross Product
    Vector Matrix Product
    Point Matrix Product

The Vector Product utility node lets you multiply a vector by another vector (or by a matrix) in several different ways. The Vector Product node has three parts: two input attributes, an operator that is applied to the two input attributes, and an output attribute for holding the result of the operation.

This node uses standard vector/matrix mathematics. Say we have two input vectors, (a,b,c) and (d,e,f), and we are calculating the output vector (x, y, z). The calculations are defined as follows:

    Dot Product

    The Dot Product (also called scalar product) is useful for comparing the direction of two vectors. If you turn on "Normalize Output", the Dot Product is actually the cosine of the angle between the two vectors. A value of 1.0 means the vectors point the same direction. A value of 0 means that they are at right angles to each other. And a value of -1 means that the vectors point in opposite directions. For example, this information can be useful for controlling how a texture is applied to a surface with respect to that surface's proximity to a light source. Dot products are also used internally in Maya to control backface culling, drawing only polygons whose normals are facing towards the camera.

    A Dot Product is defined mathematically as follows:

      Dot Product = (a*d) + (b*e) + (c*f)

    A dot product is a single value, so all three output values x, y and z will be set to the same thing.

    Cross Product

    The Cross Product (also called vector product) of two vectors gives you a new vector. This new vector is guaranteed to be perpendicular (i.e. at right angles to) both of the input vectors. Taking the Cross Product of two vectors is often useful for defining coordinate systems given two vectors. Cross Product is defined mathematically as follows:

      x = (b*f)-(c*e)

      y = (c*d)-(a*f)

      z = (a*e)-(b*d)

Note: If you just want to do simple component-by-component combinations of your vectors (i.e., x = a*d, y=b*e, z=c*f) then you should use the Multiply Divide utility node instead of the Vector Product utility node.

    Vector Matrix Product

    The Vector Matrix Product is useful for taking a *vector* in one coordinate space and moving it to another. For example, if you have a vector in camera coordinate space, you can multiply it by the Xform Matrix attribute of the camera. That will give you a new vector in world coordinate space.

    Point Matrix Product

    Similarly, the Point Matrix Product is useful for taking a *point* in one coordinate space and moving it to another. For example, if you have a point in camera coordinate space, you can multiply it by the Xform Matrix attribute of the camera. That will give you a new point in world coordinate space.

    Given an input vector (a, b, c) and an input matrix:

      A B C D
      E F G H
      I J K L
      M N O P

    Then Vector Matrix Product is defined as follows:

      x = (a*A) + (b*B) + (c*C)
      y = (a*E) + (b*F) + (c*G)
      z = (a*I) + (b*J) + (c*K)

    And the Point Matrix Product is defined as follows:

      x = (a*A) + (b*B) + (c*C) + D
      y = (a*E) + (b*F) + (c*G) + H
      z = (a*I) + (b*J) + (c*K) + L

    See Also Vector, Matrix.
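    A minimal MEL sketch of the node in Dot Product mode (the node name is an example, and the operation index for Dot Product is assumed here to be 1; with these perpendicular inputs the result is 0):

      shadingNode -asUtility vectorProduct -n "myVectorProduct";
      setAttr myVectorProduct.operation 1;      // assumed: 1 = Dot Product
      setAttr myVectorProduct.input1 1 0 0;
      setAttr myVectorProduct.input2 0 1 0;
      getAttr myVectorProduct.output;           // Result: 0 0 0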


WorldMatrix and WorldInverseMatrix

In Maya, objects can exist in either local or world space. Sometimes it is useful or necessary for Maya to convert an object's transformation information (translate, rotate, scale and potentially other attributes) from local space to world space or vice-versa. Maya handles this "toggling" between local and world space representation of an object via the use of the worldMatrix and worldInverseMatrix attributes. This shouldn't be confused with how you interact with an object when you translate it in the interface. Instead, these attributes provide a method for Maya to represent an object's position in local or global space.

The worldMatrix attribute is a 4x4 transformation matrix that Maya computes internally to determine how to represent the object in worldSpace. To toggle back to local space Maya uses the worldInverseMatrix attribute. This information can be captured and modified in advanced cases but is more commonly something that Maya handles internally or that you may see occurring automatically in Dependency Graph node connections so the information flowing between nodes is the "correct" type of information for the nodes to process. Typing the lines below in the Script Editor will show you the output from the worldMatrix and worldInverseMatrix for an object in the scene named ball.

    getAttr ball.worldMatrix[0];
    getAttr ball.worldInverseMatrix[0];

See Also World Space, Matrix.


World Space Coordinates

Objects exist within a coordinate system in Maya. When an object is first created, its position and orientation define its local space, and its local space and world space are identical. Once you transform the object by translating, rotating or scaling it (or applying a number of other transforms), its world-space position differs from its original local space. Some shading calculations work better in local space, while most should be done in world space.



