Kindra, the intelligent elevator assistant, is designed to have a unique personality. She rigidly enforces a “polite” conversation with the user, for example by insisting that the user say “please” after a request.
I also tried to bring out a critical perspective on A.I. by folding humanity into Kindra’s script, e.g. “I’m covering shift for someone today.” In a plausible future, can A.I. decide whether and when it wants to work? What is convenience to a robot?
The dramatic ending tries to make viewers uncomfortable in a way that raises questions. Is there really a maintenance robot that happens to have the same voice as Kindra? What role do design choices such as tone and pauses play in conversational design here?
The project itself is successful in terms of raising critical questions. However, the robot’s voice could use more variation, such as changes in speed, tone, and volume, which could bring out the intelligence aspect more. The elevator asset could also have been less sketchy, because I don’t want viewers to feel fear from the setting itself. As my colleague Jom suggested, it could be a luxurious, high-tech elevator that makes you feel safe and protected at the beginning.
AWS Lex Lambda
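Kindra’s politeness rule can be implemented as a Lex fulfillment Lambda that inspects the raw utterance before acting on the intent. The sketch below is a hypothetical reconstruction, not the project’s actual code: it assumes a Lex (V1) bot with a `GoToFloor` intent and a `Floor` slot, and re-prompts whenever the transcript lacks the word “please”.

```python
# Hypothetical sketch of a Lex V1 fulfillment Lambda for a "GoToFloor"
# intent (intent and slot names are assumptions, not the project's).

def close(message):
    """Build a Lex V1 'Close' response that fulfills the intent."""
    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {"contentType": "PlainText", "content": message},
        }
    }

def elicit_intent(message):
    """Build a Lex V1 'ElicitIntent' response that re-prompts the user."""
    return {
        "dialogAction": {
            "type": "ElicitIntent",
            "message": {"contentType": "PlainText", "content": message},
        }
    }

def lambda_handler(event, context):
    # Lex V1 passes the user's raw utterance in "inputTranscript".
    utterance = event.get("inputTranscript", "").lower()
    floor = event["currentIntent"]["slots"].get("Floor")
    if "please" not in utterance:
        # Kindra's rigid politeness: refuse until the user asks nicely.
        return elicit_intent("I’d be happy to help — if you ask nicely.")
    return close(f"Going to floor {floor}. Thank you for asking politely.")
```

The interesting design choice here is that politeness is checked against the transcript, not the parsed intent, so Lex can fully understand the request and Kindra can still refuse it.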
“...create a project that takes a critical perspective of AI/Machine Learning by designing something that on the surface seems plausible and sensible, but on deeper analysis is useless, absurd, or just really off in some revealing way.”
“The term “useless” can be interpreted in a range of ways, but your project must take a position and dig into the challenges, affordances, unforeseen side-effects, and potential failures of artificial intelligence and machine learning. Your project must be grounded in insights drawn from actual machine learning experiments, but please take risks, and have a sense of criticality and humor.”
Philip van Allen