Process: Uncovering The Moral Machine

cover | Editorial | Process Work

An 8.5 × 11” editorial cover illustrating the research of MIT’s Moral Machine Group.

Client

Prof. Marc Dryer

Tools

Autodesk Maya
Adobe Photoshop


Creative Process

“During an impending crash, would you rather run over the baby or the grandma? Or should you crash the car, saving the pedestrians but killing the driver?” These questions, or rather the real and uncomfortable human responses to them, will drive future A.I. learning and development. The illustration, Coding a Moral Machine, invites audiences to ponder these questions.

Part of the coursework for MSC2016H (Visualization Methods): an 8.5 × 11” editorial cover meant to engage a popular audience with an emerging scientific topic.

3D Modeling

The chess pieces incorporate basic character modeling, sculpting, and vehicle modeling techniques.

3D Simulation with MASH

Low-poly geometries were distributed onto the hand mesh to enhance the appearance of the wireframe.
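MASH handles this distribution procedurally inside Maya, but the core idea is easy to see in script form. Below is a minimal maya.cmds sketch of the same effect, instancing a low-poly prototype at every vertex of a target mesh; the node names (scatterProto, handMesh) are hypothetical stand-ins, and this is a script-based approximation, not the actual MASH network.

```python
import maya.cmds as cmds

def scatter_on_vertices(proto='scatterProto', target='handMesh'):
    """Instance one low-poly prototype at every vertex of the target mesh."""
    vtx_count = cmds.polyEvaluate(target, vertex=True)
    for i in range(vtx_count):
        # World-space position of this vertex.
        pos = cmds.xform('{0}.vtx[{1}]'.format(target, i),
                         query=True, worldSpace=True, translation=True)
        inst = cmds.instance(proto)[0]
        cmds.xform(inst, worldSpace=True, translation=pos)

scatter_on_vertices()
```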

Rigging

An IK/FK rig was applied to the hand model to achieve dynamic finger positions.
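The actual rig isn’t shown here, but the IK half of such a setup can be sketched in a few lines of maya.cmds. The joint names and positions below are placeholders for a single finger chain:

```python
import maya.cmds as cmds

# Three placeholder joints for one finger; cmds.joint parents each new
# joint under the previous one, forming a chain.
cmds.select(clear=True)
root = cmds.joint(name='finger_base', position=(0, 0, 0))
cmds.joint(name='finger_mid', position=(0, 0, 1))
tip = cmds.joint(name='finger_tip', position=(0, 0, 2))

# A single-chain IK solver drives the whole finger from one handle,
# so curling the finger is just translating/keying the handle.
handle = cmds.ikHandle(startJoint=root, endEffector=tip,
                       solver='ikSCsolver', name='finger_ik')[0]
```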

Editorial Design

Content research and composition sketching in Photoshop; lighting, texturing, and rendering with Arnold.


Would you rather crash into a baby or a grandma? As we brace our roads for a future dominated by autonomous driving, this is the kind of “would you rather” the AI overlords (I mean, protectors) will play.

You may already be familiar with the infamous “trolley problem” from memes or games: a trolley speeds down the tracks, about to kill five people tied to them. You, lever in hand, may choose to switch the trolley to a different track, where only one person is tied down. Should you pull the lever to kill one and spare five? With the rise of smart cars, maybe a more practical question for manufacturers and designers would be: would your customers buy a car that could potentially kill five pedestrians to save one driver?

Here is one PR answer to the question above: consumers will trust cars that best mimic human decision-making. To train AIs to think like humans, the Moral Machine group from MIT created a survey platform that crowdsources people’s decisions on different variations of the trolley problem. Just as humans acquire moral principles by interacting with other humans, an AI may learn what is ethical by processing real people’s answers to the Moral Machine survey. A computational model first breaks down how a human mind processes moral dilemmas using the survey data, then passes the steps humans use to arrive at a decision in a moral dilemma to the AI. From there, the model evaluates how much an individual response differs from the rest of the responses from a group (e.g., nationality, culture, physical location), and infers the group’s norms in a moral dilemma. The outcome, hopefully, is an AI that makes the most socially optimal choices.
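To make the aggregation idea concrete, here is a deliberately toy Python sketch (my own illustration, not the MIT group’s actual model): each survey answer picks one of two outcomes, each outcome is described by features, and tallying choices yields group-level “norms” that can score a new dilemma.

```python
from collections import Counter

# Hypothetical survey data: each answer is (features of the outcome the
# respondent chose, features of the outcome they rejected).
answers = [
    ({'spare_pedestrians', 'spare_young'}, {'spare_passenger'}),
    ({'spare_pedestrians'}, {'spare_passenger', 'spare_old'}),
    ({'spare_passenger'}, {'spare_pedestrians'}),
]

def infer_norms(answers):
    """Score each feature by how often it was chosen minus rejected."""
    norms = Counter()
    for chosen, rejected in answers:
        norms.update(chosen)
        norms.subtract(rejected)
    return norms

def decide(option_a, option_b, norms):
    """Pick the outcome whose features the surveyed group favors more."""
    score = lambda feats: sum(norms[f] for f in feats)
    return option_a if score(option_a) >= score(option_b) else option_b

norms = infer_norms(answers)
print(decide({'spare_pedestrians'}, {'spare_passenger'}, norms))
# -> {'spare_pedestrians'} for this toy data
```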

This was the idea behind the cover, and arriving at the final concept took no time at all.

If only that were true. When I first came across the Moral Machine Group’s research, my initial concept was more along the lines of “AI-vs-human-what-a-complicated-issue”: two roads twist into knots, one traveled by cars, one by humans. I have to admit that this concept was more utilitarian than creative. For my first Maya assignment, I wanted to model something less organic and more geometric. Plus, I thought animating the cars moving on a Möbius strip would be pretty neat to look at.


During the in-class critique, the general feedback I received was to focus more on the “moral choice”. Marc wanted us to make sure we had a stellar still before turning it into motion graphics. Many people also suggested that I create a view from inside the car, to emphasize that the choice comes from within the car at a moment’s notice – both for the human and for the computer program. So I did.

… And I did not like the execution at all. Obvious issues such as the dreadful typography aside (sorry, Scientific American, I promise I will use a media kit next time), the composition could easily be misread; i.e., it is not obvious enough that the view is from inside a car. Besides, the moral choice underlying this concept did not come across for most of the people I first showed the sketch to. “Just hit the brakes so you don’t hit the cat or the kid!” was a common response.

“But look at the HUD! It says ‘collision detected’ and you are travelling too fast to brake!” I retorted.

“This looks too busy, especially with the oncoming car.”

“If I got rid of the oncoming car, how do I show that you can’t change lanes?”

“Make it into a narrow road? Why didn’t the AI detect the kid and the cat in the first place, anyway?”

“It’s a SHARP turn! Look at the road signs!”

Alas, if you have to point these things out to the reader, the battle is already lost. Time to come up with another idea.


I re-read the paper and realized that I had not addressed its key idea in either iteration. The paper’s focus was teaching machines how to think like us, not “what will the machine do in a moral dilemma”. After realizing this, arriving at the chessboard felt natural: after all, zero-sum games like chess have been a symbolic test of machine intelligence since the conception of artificial intelligence. But this time we are not playing a zero-sum game against the machines; the machines’ move is a reflection of ours.


Modeling the car was more tedious than I expected. Since the entire car, as a chess piece, has only one shader applied, the details have to read from the actual geometry of the car rather than relying on UV mapping to create the illusion of structure. I added extra bevels around the doors and where the body meets the windshield. I was really tempted to model that narrow car billionaires drive (Tango Hybrid); but in the end, I settled for a more reasonable-looking generic smart car.
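For what it’s worth, this kind of detail-by-geometry can be as simple as beveling seam edges in maya.cmds; the mesh name and edge indices below are placeholders:

```python
import maya.cmds as cmds

# Bevel a door-seam edge loop so the seam reads as real geometry
# under a single flat shader (edge indices are placeholders).
cmds.polyBevel3('carBody.e[120:135]', offset=0.02, segments=2, chamfer=True)
```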


Making the hand sci-fi and ethereal was another interesting challenge. We learned how to create the wireframe shader in class. The scene initially had no skydome light or area light; instead, the wireframe itself was an emissive material, and the hand surface’s reflectance/metalness was set high so that it could reflect a light gradient. By the way, learning how to rig was easily my (unexpectedly) favorite part of this project: who knew that naming your objects was actually useful?
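For anyone curious, the glowing-wireframe look can be approximated with Arnold’s aiWireframe node driving the emission of an aiStandardSurface. The snippet below is a rough maya.cmds reconstruction (it assumes MtoA is loaded, and the parameter values are guesses rather than my exact settings):

```python
import maya.cmds as cmds

# Emissive wireframe look: an aiWireframe pattern drives the emission of
# an aiStandardSurface, so the lines glow without any scene lights.
shader = cmds.shadingNode('aiStandardSurface', asShader=True, name='handWire_mat')
wire = cmds.shadingNode('aiWireframe', asTexture=True, name='handWire_tex')
cmds.connectAttr(wire + '.outColor', shader + '.emissionColor')
cmds.setAttr(shader + '.emission', 1.0)

# High metalness / low base so the surface between the lines reflects a
# light gradient instead of showing flat diffuse color (values are guesses).
cmds.setAttr(shader + '.metalness', 0.9)
cmds.setAttr(shader + '.base', 0.1)
```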


I really liked how the wireframes looked; however, when I imported the hand into the chessboard scene, the effect no longer worked because of the skydome light applied to the rest of the scene.


The remedy was surprisingly simple: unlinking the light sources from certain objects, so that the hand was unaffected by the skydome light and the chessboard was unaffected by the spotlight shining on the hand. This is not a very common maneuver when lighting a scene, as removing a light’s interaction with certain objects can make the scene look fake. In my scene, I felt that the light emitted from the hand did not affect the nearby chess pieces enough, so I placed an area light underneath the palm of the hand.
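In script form, the fix is a couple of calls to Maya’s light-linking command; the light and object names below are hypothetical:

```python
import maya.cmds as cmds

# Stop the skydome from lighting the hand...
cmds.lightlink(b=True, light='skyDomeLight1', object='handMesh')
# ...and keep the hand's spotlight off the chessboard.
cmds.lightlink(b=True, light='handSpot1', object='chessBoard')

# A small area light under the palm fakes the glow spilling from the
# hand onto the nearby pieces (position it under the palm afterwards).
cmds.shadingNode('areaLight', asLight=True, name='palmFill')
```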


Although Marc had briefly introduced head modeling to the class using this tutorial, Autodesk’s Character Generator made my life much easier. The real challenge with the characters was modeling low-poly cartoon hair using nothing but the Extrude and Multi-Cut tools.
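As a tiny illustration of the extrude half (the Multi-Cut tool is interactive, so it has no one-line script equivalent), blocking out a hair clump might start like this; the mesh name and face indices are placeholders:

```python
import maya.cmds as cmds

# Extrude a patch of faces outward as one clump to block in low-poly hair.
cmds.polyExtrudeFacet('hairMesh.f[4:9]', localTranslateZ=0.4,
                      keepFacesTogether=True)
```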


If your Master’s research project involves 3D modeling, Marc’s Maya Pt. I (MSC2016) is a required course. Besides all the Maya tricks, the process of conceptualizing a cover can train your editorial senses quite a bit if you are willing to go the distance. You will probably redo many things multiple times, and in the end, something still may not be what you envisioned at the beginning (why didn’t I use ZBrush to sculpt the chess pieces?) – and that’s perfectly okay.

“Even the best strategy sometimes yields bad results—which is why computer scientists take care to distinguish between ‘process’ and ‘outcome.’ If you followed the best possible process, then you’ve done all you can, and you shouldn’t blame yourself if things didn’t go your way.”

Brian Christian, Algorithms to Live By: The Computer Science of Human Decisions
