Process: Uncovering The Moral Machine

Would you rather crash into a baby or a grandma? As we brace our roads for a future dominated by autonomous driving, this is the kind of “would you rather” the AI overlords (I mean, protectors) will play.

You may already be familiar with the infamous “trolley problem” from memes or games: a trolley speeds down the tracks, about to kill five people tied to them. You, lever in hand, may switch the trolley to a different track where only one person is tied. Should you pull the lever to kill one and spare five? With the rise of smart cars, a more practical question for manufacturers and designers might be: would your customer buy a car that could potentially kill five pedestrians to save one driver?

Here is one PR answer to the question above: consumers will trust cars that best mimic human decision-making. To train AIs to think like humans, the Moral Machine group from MIT created a survey platform that crowdsources people’s decisions on different variations of the trolley problem. Just as humans acquire moral principles by interacting with other humans, an AI may learn what is ethical from processing real people’s answers to the Moral Machine survey. A computational model first breaks down how a human mind processes moral dilemmas using the survey data, then passes the steps humans use to arrive at a decision in a moral dilemma to the AI. From there, the model evaluates how different an individual response is from the rest of the responses in a group (e.g., nationality, culture, physical location), and infers the group’s norms in a moral dilemma. The outcome, hopefully, is an AI that makes the most socially optimal choices.
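
To make that aggregation step concrete, here is a minimal Python sketch of the idea –– not the Moral Machine group’s actual model. The feature names, the naive count-based weighting, and the distance measure are all invented for illustration:

```python
# Minimal sketch of crowdsourced-norm aggregation (illustrative only;
# not the Moral Machine group's actual model). A response records which
# of two outcomes the respondent chose to spare, each described by
# hypothetical features.
from collections import Counter

FEATURES = ["young", "elderly", "pedestrian", "passenger"]

def respondent_weights(responses):
    """Naive per-respondent 'moral weights': how often a feature was
    spared minus how often it was sacrificed, averaged per response."""
    counts = Counter()
    for spared, sacrificed in responses:
        for f in spared:
            counts[f] += 1
        for f in sacrificed:
            counts[f] -= 1
    n = max(len(responses), 1)
    return {f: counts[f] / n for f in FEATURES}

def group_norm(all_respondents):
    """Average every respondent's weights into a group-level norm."""
    norm = {f: 0.0 for f in FEATURES}
    for responses in all_respondents:
        weights = respondent_weights(responses)
        for f in FEATURES:
            norm[f] += weights[f] / len(all_respondents)
    return norm

def deviation(individual_weights, norm):
    """How far one respondent sits from the group consensus."""
    return sum(abs(individual_weights[f] - norm[f]) for f in FEATURES)

# Example: two respondents who both spare the young pedestrian.
group = [
    [(["young", "pedestrian"], ["elderly", "passenger"])],
    [(["young", "pedestrian"], ["elderly", "passenger"])],
]
print(group_norm(group))  # positive weight on "young" and "pedestrian"
```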

This was the idea behind this cover, and arriving at the final concept of the editorial took no time at all.

––– If only that were true. When I first came across the Moral Machine group’s research, the initial concept was more along the lines of “AI-vs-human-what-a-complicated-issue”: two roads twist into knots, one traveled by cars, one by humans. I have to admit that this concept was more utilitarian than creative. For my first Maya assignment, I wanted to model something less organic and more geometric. Plus, I thought animating the cars moving along a Möbius strip would be pretty neat to look at.

During in-class critique, the general feedback I received was to focus more on the “moral choice”. Marc wanted us to make sure that we had a stellar still before turning it into motion graphics. Many people also suggested I create a view from inside the car, to emphasize that the choice comes from within the car at a moment’s notice –– both for the human and for the computer program. So I did:

…… And I did not like the execution at all. Obvious issues such as the dreadful typography aside (sorry Scientific American, I promise I will use a media kit next time), the composition is easily misread: it is not obvious enough that we are inside a car. Besides, the moral choice underlying this concept did not come across for most of the people I first showed the sketch to. “Just hit the brakes so you don’t hit the cat and the kid!” was a common response.

“But look at the HUD! It says ‘collision detected’ and you are travelling too fast to brake!” I retorted.

“This looks too busy, especially with the oncoming car.”

“If I got rid of the oncoming car, how do I show that you can’t change lanes?”

“Make it into a narrow road? Why didn’t the AI detect the kid and the cat in the first place anyway?”

“It’s a SHARP turn! Look at the road signs!”

Alas, if you have to point these things out to a reader, the battle is already lost. Time to come up with another idea.

Third comp sketch

I re-read the paper and realized that I had not addressed its key idea in either iteration. The paper’s focus was teaching machines how to think like us, not “what will the machine do in a moral dilemma”. After realizing this, arriving at the chessboard felt natural –– after all, zero-sum games like chess have been a benchmark for machine intelligence since the conception of artificial intelligence. But this time we are not playing a zero-sum game against the machine; the machine’s move is a reflection of ours.

Modeling the car was more tedious than I expected. Since the entire car has only one shader applied to it as a chess piece, you need to make the details readable using the actual geometry of the car, rather than relying on UV mapping to create the illusion of structure. I added extra bevels around the doors and between the body and the windshield. I was really tempted to model that narrow car billionaires drive (Tango Hybrid); in the end, I settled for a more reasonable-looking generic smart car.
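
If you prefer scripting to clicking, edge bevels like these are one call in Maya’s Python API. A minimal sketch, assuming a mesh named carBody with the door-seam edges already identified (the name and edge range are placeholders):

```python
# Minimal Maya sketch: bevel the edge loops that read as panel gaps
# (door seams, windshield trim) so they catch light under a flat shader.
# 'carBody' and the edge range are placeholders for the real mesh.
import maya.cmds as cmds

cmds.polyBevel3(
    'carBody.e[120:135]',  # hypothetical door-seam edges
    offset=0.02,           # keep the bevel narrow
    segments=2,            # slight rounding, still low-poly
    chamfer=True,
)
```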

Making the hand sci-fi and ethereal was another interesting challenge. We learned how to create the wireframe shader in class. The scene initially had no skydome light or area light; the wireframe itself was an emissive material, and the hand surface’s reflectance/metalness was set high so that it could reflect a light gradient. By the way, learning how to rig was easily my (unexpected) favorite part of this project –– who knew that naming your objects was actually useful?

I really liked how the wireframes looked; however, when I imported the hand into the chessboard scene, the effect no longer worked because of the skydome light applied to the rest of the scene.

The remedy was surprisingly simple: unlinking the light sources from certain objects, so that the hand was unaffected by the skydome light and the chessboard was unaffected by the spotlight shining on the hand. This is not a very common maneuver when you light a scene, as removing a light’s interaction with certain objects can make the scene look fake. In my scene, I felt that the light emitted from the hand did not affect the nearby chess piece enough, so I placed an area light underneath the palm of the hand.
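
The same unlinking can be scripted with Maya’s lightlink command. A minimal sketch –– the node names here are placeholders for my scene’s actual lights and meshes:

```python
# Minimal Maya sketch of breaking light links; node names are placeholders.
import maya.cmds as cmds

# Stop the skydome from lighting the wireframe hand...
cmds.lightlink(b=True, light='aiSkyDomeLight1', object='wireframeHand')

# ...and stop the hand's spotlight from lighting the chessboard.
cmds.lightlink(b=True, light='spotLight1', object='chessboard')
```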

Although Marc had briefly introduced head modeling to the class using this tutorial, Autodesk’s Character Generator made my life much easier. The challenge with the characters was modeling low-poly cartoon hair using nothing but the Extrude and Multi-Cut tools.

If your Master’s research project involves 3D modeling, Marc’s Maya Pt. I (MSC2016) is a required course. Besides all the Maya tricks, the process of conceptualizing a cover art can train your editorial senses quite a bit if you are willing to go the distance. You will probably redo many things multiple times, and in the end, something still may not be what you envisioned in the beginning (why didn’t I use ZBrush to sculpt the chess pieces?) –– and that’s perfectly okay.

“Even the best strategy sometimes yields bad results—which is why computer scientists take care to distinguish between ‘process’ and ‘outcome.’ If you followed the best possible process, then you’ve done all you can, and you shouldn’t blame yourself if things didn’t go your way.”

–– Brian Christian, Algorithms to Live By: The Computer Science of Human Decisions

Reflection: Defending The Stuck Baby

As a medical-legal illustrator, you are in charge of providing communication tools that describe either personal injury or malpractice cases with clarity and impact. The visuals you create could support a medical expert’s testimony at trial, help the litigation team explain complex technical details, or educate the judge and jury on the medical interventions involved in a case.

Leila will mention “co-design” too many times in the course of 12 weeks. And believe me, you WILL co-design the s*** out of this project –– to the point where you ask your partner to “hold down ‘alt’ for me while I rescale this thing” because your drawing tablet is way too big and the two of you are sitting way too close (this may or may not have actually happened). In all seriousness, medical-legal illustration benefits immensely from teamwork. Here’s a little preview of a late-stage teamwork process:

So why and how do you break the work down? Every case is unique (Leila has a ton of cases to choose from*), and everyone’s workflow is different, but here is how it went for our case:

Stage 1: Working as a team makes sure that your visualization is as defensible as possible. Making your visualization defensible means that 1) an expert should be able to back up every aspect of your visualization with a medical fact, and 2) your visuals do not work against your client.

Here’s a quick summary of our case: pregnant with a large baby, the plaintiff was under the care of the defendant, a family physician. During the final stage of the delivery, the baby’s shoulder became lodged in the mother’s pelvis, a condition known as shoulder dystocia. To deliver the baby, the baby’s arm was broken. The baby was also asphyxiated during the onset of shoulder dystocia, and the asphyxiation eventually led the baby to develop cerebral palsy. The plaintiff claims that the damage to the baby was caused by the defendant’s negligence, questioning whether our defendant should have anticipated shoulder dystocia and whether she should have consulted another specialist before the final stage of the delivery.

Our job was to help defend the defendant. In response to the plaintiff’s claims, our client maintains that her management of the labor and delivery was within the standard of care, and that a specialist’s attendance at the delivery would not have changed its outcome.

We began our co-design by reading the case together and mapping out the necessary events we should illustrate, as well as the best ways to organize these illustrations:

We decided on the format of an illustrated timeline. It is the most intuitive way to help the audience understand how long and difficult the labor was, and how unexpected and sudden the onset of shoulder dystocia is.

Remember, our illustrations must show that the defendant adhered to the standard of care. This means that instead of focusing on the breaking of the baby’s arm, i.e., the outcome of the delivery, we focus on the onset of shoulder dystocia. We use the sagittal view to educate the audience on what shoulder dystocia is (aka, this baby is BIG and he’s REALLY stuck), and a doctor’s view to help the audience understand how difficult it is to predict and relieve shoulder dystocia from the defendant’s perspective.

Stage 2: Now that you have the overall picture, distribute the work in a way so that everyone becomes a content expert on something.

We were each in charge of a key panel. Because the visualizations need to stand up to high levels of scrutiny, we acted as each other’s content reviewers. The initial task assignment might have been C in charge of the first panel, A the second, and T the third; by the end, it had evolved into C knowing a lot about the angle of the pelvis, A being the go-to for notochord positioning, and me drawing a lot of pelvis.

Stage 3: Be stylish, be consistent.

From the sketches above, you can clearly see that my partners and I have very different rendering styles. This means that once we knew what was going to be on our final board, we needed to make sure the final product did not look like it came from three different hands. So first, everybody swore an oath on the collective maintenance of the sanctity that is the style sheet:

We, of course, test-drove the style sheet on a full-scale board:

Next came rendering the actual panels. This step started with A inking all of the final sketches, C masking all the render areas, and T adding the overall shading and tones.

I wish our work had ended at my render. Alas, as the workflow video shows, the first complete color render was passed back and forth many times between the partners. One major strength of our visualization was the continuity of the storytelling and the dynamic baby positions used. However, this also meant that we could not reuse most of the assets –– i.e., we had to render a new baby each time the baby switched positions across the board. To enforce consistency, we made sure that everyone had a hand in the final render: A refined the outlines of the uterus, C added crosshatching to address hue variations in some of the shadowed areas, and A and T together adjusted the hue of the baby so that the final renders better complement that of the timeline.

Stage 4: Money matters.

Overall, I was happy with our final product. Compared to the other teams, our storytelling is definitely more linear and minimal, as we removed many information-visualization elements such as the icons illustrating labor progression and event color coding. However, I think the storytelling benefitted from this simplicity, and the timeline helps put the medical terminology into the context of reaction time, thus strengthening our litigators’ argument. Also, did I mention that we stayed within our client’s budget? That’s right, one of the restrictions Leila gave to all the co-design teams was a max budget of $4,500, or $1,500 per panel (or per artist) –– though I must admit that for a student project, this apparently-generous budget affected our design decisions MUCH less than it would have on a professional one.

Test Render: Wiring A CRISPR Logic Gate

In 2017, I met some very cool people from the Toronto iGEM team. For their entry project, they wanted to fine-tune the CRISPR-Cas9 system via a light-sensitive switch. This is where I first learned how you can turn a molecular network into logical functional units. While researching a molecular topic to illustrate for Derek’s molecular vis course, I read about how researchers have constructed the largest living circuits in yeast cells. The coolest part of the biggest yeast genetic circuit was that you need only a single type of logic gate, i.e., a single transcription cascade, to compute multiple logic functions and produce multiple genetic products to control how a cell functions.
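
Why is one gate type enough? Because some gates are functionally complete: every other Boolean function can be wired from copies of them alone. A quick sketch in plain Python, using NOR as the example gate (my stand-in for a transcription cascade, not the paper’s exact design):

```python
# Sketch: one gate type is enough. NOR is functionally complete, so any
# other logic function can be composed from it alone -- plain Python
# stand-ins here, not the actual yeast circuit design.

def NOR(a: bool, b: bool) -> bool:
    return not (a or b)

def NOT(a: bool) -> bool:
    return NOR(a, a)

def OR(a: bool, b: bool) -> bool:
    return NOT(NOR(a, b))

def AND(a: bool, b: bool) -> bool:
    return NOR(NOT(a), NOT(b))

# Truth-table check: AND built purely from NORs behaves as expected.
for a in (False, True):
    for b in (False, True):
        assert AND(a, b) == (a and b)
```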

The early draft focused more on the individual elements of the circuit. For example, to complete the circuit, you must keep the output of the circuit, a strand of guide RNA, inside the nucleus. By nature, RNA wants to exit the nucleus. How do we keep that RNA inside? You flank the RNA export signal with two self-cleaving ribozymes. As the ribozymes cut themselves free from the guide RNA, they take the export signals with them, leaving the guide RNA to roam inside the nucleus until it’s picked up by its Cas9 counterpart.

CRISPR Circuit draft

––– Spoiler: the ribozyme part did not make it into the final draft, and neither did the ominous-looking polymerase. After a few rounds of peer review, everyone pointed out that I needed to include an application of the molecular gates to complete the narrative. The focus of this piece is not the molecular construct, but the possibility it inspires.

Making a molecule inside a cell has become pretty standard. But with advanced synthetic tools like CRISPR, we could add some logic to this production process. Say, let’s only make that molecule under special circumstances. I decided to describe the gene circuit in the context of cancer. The gene circuit detects cancer using some logic, such as: am I in a cancer cell, or am I in a normal cell? If we are in a normal cell, we don’t make the molecule. If we are in a cancer cell, we make the molecule that kills the cell.
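
Stripped of the biochemistry, that decision is just a Boolean function over what the cell’s state reports. A toy sketch –– the marker names are invented, and a real circuit would sense promoter activity rather than Python booleans:

```python
# Toy sketch of the cancer-sensing decision as a Boolean function.
# Marker names are invented for illustration.

def make_killer_molecule(marker_a_active: bool, marker_b_active: bool) -> bool:
    """AND gate over two hypothetical cancer markers: only produce the
    cell-killing molecule when both are active."""
    return marker_a_active and marker_b_active

assert make_killer_molecule(True, True)        # cancer cell: fire
assert not make_killer_molecule(True, False)   # normal cell: stay quiet
```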

Coming up with a scenario got me very excited. Inspired by the initial 8-bit font choice, I changed all the schematic elements into 8-bit representations after looking up tutorials on creating pixel art (WARNING: this is addictive).

8-bit polymerase, Cas9, cancer cell and normal cell

The graphic itself is a composite of several renders. The models were retrieved from the Protein Data Bank via the ePMV plugin and rendered using Maxon Cinema 4D. The long DNA strand was taken from the DNA associated with the proteins themselves; the short segments I built using ePMV.

Early drafts and composites for the CRISPR circuit

If you have a molecular biology or biochemistry background, Derek’s course will definitely be the highlight of your semester. The number of tools you learn within 5 weeks is INSANE.

Sketches: Journeying to the Brain Underbelly

Boy oh boy. This is going to be a long one.

It all started with a seemingly simple neuro assignment: depicting the cerebrum or the brainstem from two perspectives. Looking at the project description, I thought, what if you could do everything the project asks for in ONE illustration?

When the weather is cold and slimy, a walk to UofT’s anatomy museum just feels far. So instead, I stayed cooped up in our Mississauga workspace and stared at some models:

SOMSO brain model for the brainstem sketch
Ventral lateral view of the diencephalon sketch

I was happy with the concept and sketch, so I proceeded to the Photoshop render. Come the assignment deadline, this is what I submitted:

Spring critique rolled around. The more I looked at what I had handed in, the more I wanted to change it. I had a better idea of how everything fits together after a semester of neuro. The direction of the corona radiata looked weird? Change it. The cerebral peduncles looked pasted-on and modular? Change it. The general shading of the cortex looked too schematic? Change it. Emphasize the shading of the orbit. Clarify the structures that have been removed.

MSC2012: Ventral Lateral view of the Diencephalon
Second revision of the brainstem drawing: the diencephalon, focusing on the brainstem and the cranial nerves. Sketched and painted in Photoshop.

It was the night before the spring critique, and I was reasonably content with the progress of the illustration. “Interesting. I’ve never seen a brain in this orientation before,” someone commented. “Something is still off though; I need to look at more references.”

“The way you rendered the white fiber around the thalamus is interesting. Why not do the same thing for the cortex?” Another suggested.

“The optic chiasm looks weird. So does the cross-section.”

“You definitely need to look at more actual specimens.”

I went back to Grant’s museum and focused on specimens that showed how the diencephalon connects to the cortex. On the model, you see a vacant space where the 4th ventricle is –– and it looks awkward as hell when you add some depth shading to it. Looking at my sketches of the human specimens, I decided I could either render the entire internal capsule or render the basal ganglia overlapping the thalamus to avoid showing that awkward, sans-cerebrospinal-fluid black gap.

The third round of diencephalon sketches, made at Grant’s museum.

I also made maquettes with Sculpey polymer clay to help me draw and shade the caudate nucleus (RIP Sculpey rose).

This current render is the closest to what I had envisioned while sketching the first draft. Could it be better? Absolutely. But, for now, I’m letting my brain rest from looking at brains. 

Process: OR Adventures

Thou shalt not utter the words “surgical illustration” around thy BMC peers without proper trigger warnings. However, just like any other challenging assignment you have completed, surgical illustration (“surgery” for short) will not kill you; it will make you stronger.

(**This post contains graphic surgery pictures. Viewer discretion is advised.**)

Love it or not, surgery will definitely be one of the most memorable projects you complete on your BMC journey. Michael takes great care in splitting the class into pairs, matching one student’s work style, lair location, and sometimes height* to another’s. The result is, hopefully, a partnership that carries both of you through the most perplexing surgical literature, the most intimidating meeting with your surgeon, or the most brutal roast from Michael.

*: Prof. Michael Corrin admits that sometimes he would match a tall student with a short student, ensuring maximum visibility of the surgical field for both parties.

My partner M and I worked with Dr. Amr Elmaraghy, an orthopedic specialist from St. Joseph’s Health Centre. We received our appointment in early December, contacted St. Joseph’s OR staff in mid-January (make sure to update your flu shots during the holidays, young Jedi!), and scheduled our observership just before February. To prep us for surgery, Dr. Elm sent us three studies he co-authored on pec major repair the week before OR day.

We met Dr. Elm three hours before the scheduled surgery and discussed the main goals of the illustration as well as the scope of the project. One knowledge gap Dr. Elm wanted us to address was the architecture of the pec and how the pec tendon inserts on the humerus. Previous literature and anatomy atlases usually depict the layers of the pec tendon “twisting” onto the humerus, which Dr. Elm’s team found to be inaccurate. Instead of twisting, imagine the segments of pec major stacking on top of one another, fanning out to anchor around the humerus like a spread deck of cards:

As useful as OR sketching is, having a camera (with the surgical team’s permission) and knowing what pictures to take won’t hurt. This is where having a partner comes in handy –– M was much more efficient at sketching, and I was more at ease using the Sony a6000. An hour later, M had captured most of the essential visual information, whereas I had taken 8 GB worth of photos. Between all the OR circling and lens switching, both of us also jotted down as much of Dr. Elm’s live commentary as we could: “You see that white thing? That’s the sternal head.” “I need a Hohmann retractor now!” Though not a focus of the procedure, the pictures I took of the instrument table helped us a lot. I also deliberately took many photographs of the surgeons’ hand motions. You can easily find a ton of reference pictures of the tools from manufacturers such as Arthrex, but the chance to develop a visual record of how the surgeons interact with those tools doesn’t come by every day.


Based on our notes, M and I compiled a sequential report. For things that we missed, we consulted Dr. Elm’s old X-rays, surgery videos that he recorded himself, as well as demo videos recorded by the manufacturer. Since the surgery itself was relatively short, we broke the procedure down into just seven major steps:

  1. Patient orientation and pathology.
  2. Locating the tear and making the first incision.
  3. Mobilizing the torn tendon.
  4. Suturing the tendon.
  5. Preparation for button insertion.
  6. Button insertion.
  7. Advancing the torn tendon to the humerus.

Over the next two months we fell into a routine: making the sketches at the beginning of the week –> bringing our sketches to Michael for critique at the end of the week (where Michael pointed out amateur mistakes such as “not enough tension” in one figure, or tools “not straight enough” in another) –> redrawing everything. Draw, meet, redraw, repeat. The pattern repeated itself until Michael and your partner had no major complaints about your sketches –– the result of your actual improvement, as well as their collective exhaustion. You can see the process for figure 1 here:


During this iterative process I changed my workflow in two major ways: 1) the complete abandonment of traditional sketching, and 2) Maquette. Maquette. Maquette. For example, this is how my sketches looked in the beginning:

And this is how my sketches looked a month later, when I sketched everything in Photoshop:

The switch was driven mainly by considerations of surgical storytelling. Efficient depictions of surgical steps often require layering: the perspective is kept constant, and by removing layers of anatomy sequentially, the chronology of the events can be compressed comprehensibly in 2D space. Traditionally, the sketches are done on translucent paper with graphite pencil, where each new panel is reassembled on a fresh sheet and registered by aligning pre-marked crosshairs. Sketching in Photoshop not only eliminated an extra digitization step (which means extra traveling and queuing to use the studio flatbed scanner), it also ensured that the constant elements across panels stayed consistent.

Building and using maquettes are essential skills for a medical illustrator. We learned how to model surgical tools in class using C4D, though depending on what the illustration needs, there is no need for an elaborate maquette if all you want is to get the angle of your pec button inserter right:

It’s easy to get caught up in C4D, and this is why you need a good partner like M to keep you grounded.

I struggled to understand where and how the Krackow stitches were made on the torn tendon. Dr. Elm explained that the surgeon starts the suture going from medial to lateral, creating interlocking loops on the lateral edge of the tendon, where the free ends of the sutures exit laterally to be pulled onto the bone.

I hope the above description is as confusing for you as it was for me. Though I took a decent shot of the tendon’s sutures, the tongue-like red blob did not help much. With the C4D window still open, my first instinct was to search YouTube for a “how to model thread” tutorial.

I was saved from going down another tutorial wormhole by M’s craftiness and good sense ––  homegirl made a maquette with yarn and socks for us:

The next stage of the illustration was having our comp sketches reviewed by our surgeon. After receiving Dr. Elm’s approval, one last step remained –– inking.

Determined to keep an all-digital workflow, I “inked” all of my comp sketches in Illustrator:


Surgical illustration is probably the biggest project you take on as a BMC firstie. Looking back, I’m surprised by how simple the final product looks, though the process sure was not simple. It’s a great project to brush up your technical skills, both in traditional pen-and-ink and in 3D modeling. If you come prepared for your weekly meetings and actually take the time to refine the details he points out, you will receive a ton of support from your instructor –– and be amazed, in the end, that you came out of the experience safe and sound after all. It really wasn’t that bad.

See my partner M’s illustrations here.

Test Render: Unraveling the Schwann Cell

This project came after the pelvis ZBrush sculpt, where we reconstructed skeletal structures from DICOM data, and it was our first 3D project in BMC where we sculpted organic objects from scratch. My classmates really showed off their newly gained sculpting skills on this one –– but I decided to do mine entirely in Cinema 4D. I was absolutely fascinated by the power of deformers in C4D, and by how a beautiful, organic shape was just a few parameter changes away. Take the Schwann cell: it’s simply a bend deformer on a very long cube.
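
That setup is only a few lines in C4D’s Python API as well. A rough sketch of the idea –– I built mine through the interface, and the dimensions and angles below are placeholders:

```python
# Rough C4D Python sketch: a very long, subdivided cube wrapped by a
# Bend deformer -- the core of the Schwann cell model. All values are
# placeholders; the real model was set up through the interface.
import c4d

def main():
    doc = c4d.documents.GetActiveDocument()

    # A long, thin cube stands in for the wrapping membrane.
    cube = c4d.BaseObject(c4d.Ocube)
    cube[c4d.PRIM_CUBE_LEN] = c4d.Vector(2000.0, 10.0, 200.0)
    cube[c4d.PRIM_CUBE_SUBX] = 100  # enough segments to bend smoothly

    # The Bend deformer curls the cube around the axon.
    bend = c4d.BaseObject(c4d.Obend)
    bend[c4d.DEFORMOBJECT_SIZE] = c4d.Vector(2000.0, 10.0, 200.0)
    bend[c4d.DEFORMOBJECT_STRENGTH] = c4d.utils.DegToRad(720.0)  # two wraps

    doc.InsertObject(cube)
    bend.InsertUnder(cube)  # a deformer acts on its parent object
    c4d.EventAdd()

if __name__ == '__main__':
    main()
```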

Radivoj V. Krstic’s Microscopic Anatomy was a savior to many in the class looking to render a cellular subject. In my concept sketch, I decided to include two different cross-sections of the Schwann cell, highlighting the bulge at either end of the node of Ranvier and a layer of the basal lamina.

Schwann Cell Render Concept Sketch

For the lighting, I initially set up the scene to depict a relatively tranquil cellular environment. However, after stumbling upon the stunning micrography by @nerdcandy, I completely changed my mind about the setup.

Macrophage and axons
Schwann cell interaction with macrophage during Wallerian degeneration

The ease of animation is one reason why I wanted to do everything in C4D. I’m still working on the final animation, but you can see a quick clip here: