Star-Lord Helmet

Behind the Scenes

Mthokozisi "Hap" Sibanda

Background

Inspired by Marvel's Guardians of the Galaxy, this experience features a helmet modelled in the 3D software Blender. We then import the model into the AR creation software, Meta Spark, where we implement visual effects and basic interactions.

Asset Preparation

We create the model in Blender using a technique known as box modelling. With this technique, we take primitive shapes (boxes, cylinders, spheres, etc.), then edit and combine these shapes to create the model. Box modelling works well for objects like helmets, which have distinct, rigid shapes. To ensure accuracy with the source material, we gather multiple reference images of the helmet as it appears in the Guardians of the Galaxy movies. These references help us understand how the helmet should look in different lighting conditions and from different angles.

Timelapse of box modelling the helmet

Once we complete the 3D model, we need to optimise it for mobile AR by reducing its "poly-count". 3D models are made up of many connected triangles called "polygons". Highly detailed models use many polygons and are appropriately called "high-poly" models. However, the higher the poly-count of the model, the more computing resources we need to display it. This makes high-poly models a poor choice for real-time applications like games and AR/VR apps, where resources need to be used wisely to ensure a smooth experience.
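
To get a feel for why poly-count matters, consider the raw vertex data the GPU must hold and process each frame. The sketch below uses illustrative figures (32 bytes per vertex, unindexed triangles); these are assumptions for the sake of example, not Meta Spark limits:

```python
def vertex_buffer_bytes(triangle_count: int, bytes_per_vertex: int = 32) -> int:
    """Rough memory footprint of an unindexed mesh.

    Assumes 3 unique vertices per triangle and 32 bytes per vertex
    (position + normal + UV), illustrative figures only.
    """
    return triangle_count * 3 * bytes_per_vertex

# A detailed high-poly sculpt vs a mobile-friendly low-poly mesh:
high = vertex_buffer_bytes(1_000_000)  # 96,000,000 bytes of vertex data
low = vertex_buffer_bytes(20_000)      # 1,920,000 bytes
print(high, low)
```

The exact numbers vary by engine and vertex format, but the linear relationship is the point: fifty times the polygons means roughly fifty times the vertex data to push every frame.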

So, if we want to use our helmet in our AR experience, we need to "optimise" it by lowering the number of polygons it has while preserving as much detail as possible. There are many methods for reducing poly-count; we choose a process called "decimation". When we decimate a model, we reduce its poly-count by automatically merging and collapsing its triangles. This can dramatically decrease the quality of the model if used carelessly. So, instead of decimating the entire model at once, we apply varying levels of decimation to different parts of the model. We focus on decimating areas where the extra polygons don't contribute much to the overall shape, and decimate more heavily in areas where the user is unlikely to notice the loss in detail. The result is the simplified "low-poly" model that we use in the AR experience.
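
The budgeting behind per-region decimation is simple arithmetic. The region names, triangle counts, and keep ratios below are hypothetical, not our actual mesh data, but they show how uneven decimation concentrates detail where it counts:

```python
# Hypothetical triangle counts per region of a high-poly helmet,
# with a more aggressive decimation ratio where detail matters less.
regions = {
    # region: (original_tris, keep_ratio)
    "faceplate": (120_000, 0.20),  # seen up close: keep more detail
    "dome":      (200_000, 0.10),
    "underside": (80_000,  0.05),  # rarely visible: decimate aggressively
}

def decimated_total(regions):
    """Total triangle count after applying each region's keep ratio."""
    return sum(int(tris * ratio) for tris, ratio in regions.values())

original = sum(tris for tris, _ in regions.values())
print(original, "->", decimated_total(regions))  # 400000 -> 48000
```

A uniform 12% decimation would give the same total, but it would blur the faceplate as much as the underside; varying the ratio per region spends the polygon budget where the user actually looks.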

Optimised "low-poly" model (left) vs Unoptimised "high-poly" model (right)

Side note: Decimation is a quick and cost-effective method that works well for our use case. However, if you need to use your model for complex animation (e.g. dancing 3D characters), then you need to invest time into manually reducing the poly-count. This ensures that the model can still be animated properly.

Now that we have our "low-poly" model, we assign colours to its different parts. Then, we bake the wireframe of our high-poly model into a glowing image texture. We use this texture later in Meta Spark as part of the dissolving visual effect.
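
Conceptually, baking the wireframe means rasterising each edge of the high-poly mesh into a 2D image at its UV position. A toy pure-Python sketch of that idea (hypothetical UV-space edges, not the real Blender bake):

```python
def bake_wireframe(edges, size=8):
    """Rasterize UV-space edges (pairs of (u, v) points in [0, 1]) into
    a size x size grid: 1 where the wireframe "glows", 0 elsewhere."""
    img = [[0] * size for _ in range(size)]
    for (u0, v0), (u1, v1) in edges:
        steps = size * 2  # sample densely enough to cover every pixel crossed
        for i in range(steps + 1):
            t = i / steps
            x = min(int((u0 + (u1 - u0) * t) * size), size - 1)
            y = min(int((v0 + (v1 - v0) * t) * size), size - 1)
            img[y][x] = 1
    return img

# One diagonal edge across the UV square lights up the diagonal pixels.
texture = bake_wireframe([((0.0, 0.0), (1.0, 1.0))])
print(texture[0][0], texture[7][7])  # -> 1 1
```

Blender's texture baking does this (and anti-aliasing, emission colour, etc.) for every edge of the mesh at once; the output image can then be sampled by any material, including one in Meta Spark.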

Helmet with glowing wireframe (left) vs saved image texture of the glowing wireframe (right)

Implementing AR

We take our low-poly helmet model and bring it into the AR creation software, Meta Spark. We attach it to the user's head and implement controls for changing the size of the helmet, to accommodate different head sizes.
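
The size control amounts to mapping a slider value to a clamped uniform scale factor on the helmet. A minimal sketch of that mapping (the scale range is an assumption for illustration, not a Meta Spark default):

```python
def helmet_scale(slider: float, min_scale: float = 0.85, max_scale: float = 1.25) -> float:
    """Map a 0..1 slider to a uniform scale factor, clamped so the
    helmet can never become absurdly small or large."""
    slider = max(0.0, min(1.0, slider))  # clamp out-of-range input
    return min_scale + (max_scale - min_scale) * slider

print(helmet_scale(0.0), helmet_scale(0.5), helmet_scale(1.0))
```

In Meta Spark itself this logic lives in the patch editor rather than in code, but the mapping is the same: one user-facing number in, one clamped scale out.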

We also implement controls for making the helmet appear/disappear when the user taps the screen. To implement the dissolving effect shown during this transition, we use a third-party asset by Valerii (@ne.zloy.ya on Instagram). This allows us to blend between our helmet's normal colours, complete transparency, and the glowing texture we saved earlier.
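
The dissolve can be understood as a per-pixel blend driven by a single progress value: at 0 the helmet shows its normal colour, around the midpoint the glowing wireframe dominates, and at 1 it is fully transparent. A simplified sketch of that blend (our own illustration, not Valerii's actual shader):

```python
def dissolve_pixel(base_rgb, glow_rgb, progress):
    """Blend one pixel during the dissolve.

    progress 0.0 -> normal colour, fully opaque
    progress 0.5 -> glowing wireframe at peak intensity
    progress 1.0 -> fully transparent
    Returns (r, g, b, alpha).
    """
    progress = max(0.0, min(1.0, progress))
    glow = 1.0 - abs(progress - 0.5) * 2.0  # peaks at progress == 0.5
    alpha = 1.0 - progress                  # fades out over the effect
    rgb = tuple(b * (1.0 - glow) + g * glow for b, g in zip(base_rgb, glow_rgb))
    return (*rgb, alpha)

# Start of the effect: pure base colour, fully opaque.
print(dissolve_pixel((0.6, 0.2, 0.1), (0.2, 0.9, 1.0), 0.0))
```

In the real effect this runs per pixel on the GPU, with the glow masked by the baked wireframe texture so only the wire lines light up as the helmet fades.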

Try the experience yourself