Tate Johnson
Nov 30, 2020

Future Hybrid Mixed Reality Design Studio Environments

11/24

I spoke with peers about their experiences working remotely, and also recalled previous conversations and personal experiences with the hybrid studio, both this semester and last. Based on those conversations, the direction I'd like to move in relates to the sharing and discussion of 3D physical work.

I used a spectrum method instead of specific personas, considering three positions along a spectrum of access, ability, knowledge, and personality rather than three distinct personas. This helped me better understand the range of needs among the stakeholders of this project.

I relied heavily on experiences and conversations from our most recent project, the Miller ICA exhibit, to shape my project moving forward. I think the issue of sharing 3D work in a studio context is really important and isn't addressed by current 2D solutions (Miro, Figma, images, etc.). There's an experience of walking through a studio and picking up someone's object, or getting close and navigating a 3D object yourself. That independent exploration, a sort of after-hours or informal observation of 3D work, isn't captured by, say, looking through four pictures of a model on a Medium post, or going to a Figma board and trying to piece together what this 'thing' actually is.

To accomplish this, I'm relying on the assumption that photogrammetry software will be sufficiently advanced and ubiquitous that making a medium-fidelity 3D scan of a student's work would take about the same amount of time as shooting the 3–5 pictures an iteration update on Medium usually requires. Instead of images as updates in a blog format, users have a sort of 'file folder' attached to them in a mixed reality interface, and other users can go through this interface (a branching file structure, maybe) to view and interact with work and models. This makes the experience more equitable for those who are remote, and also lets those in the studio interact with work in different ways.
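
As a rough illustration of what those attached 'file folders' might hold, here's a minimal sketch of the underlying data model. The names (ScanEntry, StudioMember) are my own placeholders, not any real platform's API.

```python
from __future__ import annotations
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ScanEntry:
    """One medium-fidelity 3D scan posted as an iteration update."""
    title: str
    model_file: str          # path/URL to the scanned mesh, e.g. "walls_v3.usdz"
    captured_at: datetime
    tags: list[str] = field(default_factory=list)   # e.g. ["Miller ICA", "iteration 3"]

@dataclass
class StudioMember:
    """A student whose work others can browse in the MR interface."""
    name: str
    scans: list[ScanEntry] = field(default_factory=list)   # the 'file folder' attached to them

    def latest(self) -> ScanEntry | None:
        """The most recent iteration, i.e. what a visitor would see first."""
        return max(self.scans, key=lambda s: s.captured_at, default=None)
```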

Some of the possible interactions I was considering related specifically to the environments workflow, but they could easily be adapted for products students. I began to think about the interaction of picking up the pin in Google Maps and the transition from the overhead view to the first-person Street View. I think a similar transition could be effective for looking at a physical environment and then moving to a first-person view within the space. This also raises questions about how future HoloLens technology recognizes the viewing height of the user and adapts content based on it. Based on the brief, I'm interested to see what I will assume has been invented by the time this technology is used, and what I will try to engineer from current technologies.
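
To make the Street View comparison concrete, here is a minimal sketch of that overhead-to-first-person transition, assuming the headset can report the viewer's standing eye height. The function names and the 1.6 m default are my own placeholders, not an existing API.

```python
import math

def lerp(a, b, t):
    """Linear interpolation between two scalars."""
    return a + (b - a) * t

def transition_camera(overhead_pos, pin_pos, eye_height_m=1.6, t=0.0):
    """Blend the camera from an overhead orbit position down to a first-person
    view standing at the dropped pin, at the user's measured eye height.
    t runs from 0.0 (overhead) to 1.0 (standing in the space)."""
    # Ease the interpolation so the drop-in feels like the Maps transition
    eased = 0.5 - 0.5 * math.cos(math.pi * t)
    x = lerp(overhead_pos[0], pin_pos[0], eased)
    y = lerp(overhead_pos[1], pin_pos[1], eased)
    z = lerp(overhead_pos[2], pin_pos[2] + eye_height_m, eased)
    return (x, y, z)

# e.g. halfway through dropping into a model of the exhibit space:
print(transition_camera((0, -10, 20), (2, 3, 0), eye_height_m=1.55, t=0.5))
```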

12/1

I began to prototype and think about what lives in this environment and what is afforded by the HoloLens versus existing 2D digital technologies. Although I explored file management in the MR space, I'd like to be very succinct about what is MR and what it does best: 3D objects. The more of the complex interactions I can leave on the computer, the better. We're already efficient with the mouse-and-keyboard interface, and keeping those tasks there avoids adding the cognitive load of floating text and signifiers in the mixed reality space that students would have to learn and adopt.

I experimented with current scanning technologies to understand the feasibility and limitations of what I was proposing. The first set of images is from experimenting with photogrammetry scanning; one of its disadvantages is that it has trouble with perfectly flat planes and faces, which many environments models have. I also tried using the infrared sensor on my iPhone to create a model from its dot projector. Although this was more accurate in terms of the overall form, it doesn't create a solid object but rather a point cloud of the projected dots. I decided that for the demo I will just use my exported SketchUp model from the first project, but I think the 10-year projection for this project means that the hardware and software for a very easy, fast, and detailed scan of a physical model isn't a stretch. Given how prevalent infrared scanners are becoming in phones, and the improvement of photogrammetry software through machine learning, I think a student could scan and upload a model within a matter of minutes before a class critique.
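
To illustrate why the infrared scan isn't directly usable as a 'solid' object, here's a toy comparison of the two representations I was getting back. This is purely illustrative; no real scanning API is shown.

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class PointCloud:
    """What the dot-projector scan returns: one colored point per projected dot."""
    points: list[tuple[float, float, float]]
    colors: list[tuple[int, int, int]]

@dataclass
class Mesh:
    """What photogrammetry (or a surface-reconstruction pass) produces:
    vertices joined into faces, i.e. an object with actual surfaces."""
    vertices: list[tuple[float, float, float]]
    faces: list[tuple[int, int, int]]   # each face is three indices into vertices

# A point cloud has no faces, so there is nothing to shade, occlude, or pick up;
# turning it into a Mesh needs a reconstruction step, which is the part I'm
# assuming becomes fast and automatic within the 10-year projection.
```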

Failed attempts at photogrammetry scans (issues with flat black surfaces)
Dot-matrix cloud map alongside the SketchUp model in Reality Composer / the infrared scan with colored points
Studio workspace

I documented my workspaces in the studio, on my laptop, and on my phone to understand the different tools and workflows I use, how my MR concept fits into this process, and how the MR environment fits into the larger system.

Digital workspace
Mobile environment

I think my concept exists as another tool that can be integrated with future versions of current applications and platforms. I would like to keep what is currently successful in screen-based digital environments rather than trying (and likely failing) to reinvent those efficient interfaces in mixed reality. Some of the tools I consider successful and that my environment can integrate with:

messaging/communication: Gmail, Messenger, and SMS for text; Zoom, Google Meet, and Discord for voice/video

community: Discord, Slack, Canvas

file management: Box, Google Drive, Canvas

3D modeling: SketchUp, AutoCAD, SolidWorks

My concept will exist in the gap between all of these offerings: a new way to share, discuss, and critique 3D work and create an equitable experience regardless of the physical context around each user. Appointment and call management and organization could happen through Discord. When users want to annotate pins or highlights on a model, they can type using the richer interface of a keyboard instead of speaking in potentially noise-sensitive physical environments. Annotations, or a timeline of different pins and selections, could be shown in their preferred 3D modeling platform, since software like Fusion 360 and SolidWorks are already timeline-based modelers. I predict telepresence will be incorporated into platforms like Discord, Facebook, Gmail, and Slack, which already have primitive versions of it. My concept exists as a tool environment and will leave most social aspects to the future version of whatever community platform the class uses.
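
A small sketch of what one of those typed annotation pins might carry so it could be replayed in a timeline-based modeler. The field names are my own placeholders, not any real file format.

```python
from __future__ import annotations
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AnnotationPin:
    author: str
    anchor: tuple[float, float, float]   # point on the model the pin is attached to
    note: str                            # typed on the laptop, not dictated
    created_at: datetime

def as_timeline(pins: list[AnnotationPin]) -> list[AnnotationPin]:
    """Order pins chronologically so a timeline-based modeler could replay
    the critique alongside the model's own history."""
    return sorted(pins, key=lambda p: p.created_at)
```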

12/3

I made a storyboard covering some of the initial interactions, the gesture controls, and the core functionality. I related many of my interaction elements to existing languages users are familiar with. For dropping pins I looked at Google Maps Street View and its drag-and-drop of a triangular element to create a pin. I borrowed the control language from many 3D modeling and viewing programs and translated it into the MR space: Pan, Orbit, Scale, Move.

I used Google's Teachable Machine to create a quick machine learning test for gesture controls. I explored the confidence with which it could differentiate the gestures to better understand how postures and relative movements create colloquial 'gestures'. I think the most significant signal comes from the hand's movement relative to the shoulder and elbow points, as well as the hands' positions relative to each other. The current HoloLens has hand tracking and understands gestures like 'pinch' and 'tap', so I'm confident these more nuanced gestures will be understood.
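
The features I cared about in that test were relative positions, not absolute ones. Here's a minimal sketch of that idea with made-up keypoint names; it is not the actual Teachable Machine output format.

```python
def gesture_features(keypoints):
    """Build a feature vector from the hand's position relative to the shoulder
    and elbow, plus the two hands relative to each other.
    `keypoints` maps names like 'r_hand' to (x, y) screen coordinates."""
    def delta(a, b):
        return (keypoints[a][0] - keypoints[b][0],
                keypoints[a][1] - keypoints[b][1])

    return [
        *delta("r_hand", "r_shoulder"),   # hand relative to shoulder
        *delta("r_hand", "r_elbow"),      # hand relative to elbow
        *delta("r_hand", "l_hand"),       # hands relative to each other
    ]

# e.g. a rough 'orbit' posture might produce features like:
print(gesture_features({
    "r_hand": (0.72, 0.40), "l_hand": (0.30, 0.42),
    "r_shoulder": (0.60, 0.55), "r_elbow": (0.66, 0.50),
}))
```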

I created a Reality Composer mockup with cones representing the different viewers of a model; I found the cone form represents directionality and location well, so one can understand what another user's point of view is.
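
Each cone really only needs two pieces of information about a remote user: head position and gaze direction. A minimal sketch of how a viewer cone could be oriented from those; the structure is mine, not Reality Composer's.

```python
from __future__ import annotations
from dataclasses import dataclass
import math

@dataclass
class ViewerCone:
    position: tuple[float, float, float]   # the remote user's head position
    direction: tuple[float, float, float]  # unit vector of their gaze

    def yaw_pitch(self) -> tuple[float, float]:
        """Angles (radians) to orient the cone so its tip points where the user is looking."""
        dx, dy, dz = self.direction
        yaw = math.atan2(dy, dx)
        pitch = math.asin(max(-1.0, min(1.0, dz)))
        return yaw, pitch
```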

I discussed the concept of 'bring to me', which would let a user pull everyone to their perspective and relative position so they can share exactly what they're seeing. I received some helpful feedback on the vertigo and disorientation that involuntary perspective and scale shifts cause, so I won't use this concept. I think having the cones represent viewpoints is effective enough to show perspective, and users can either modify the model or physically move in their space to match someone else's POV after a conversational cue like "come look at this".

I went around to different classmates and asked them to demonstrate how they would interact with a model given the prompts 'orbit', 'scale', and 'drag'.

I also used resources about the established gestures in the HoloLens system to guide my own. It was useful to see what its designers found successful and went on to implement, instead of relying only on my more speculative gestures.

12/8

I created a more fleshed-out storyboard for the functionality and flow of in-class, synchronous out-of-class, and asynchronous experiences. The different modalities afford different ways of using the environment.

I continue to develop and reconsider what is contained in this environment and how it sits within the larger studio and workflow system. I'm of the opinion that the mixed reality space should only be used for what it does best: three-dimensional objects and spaces. Floating buttons, other interface chrome, and 2D elements are better suited to the complex, learned interfaces of a laptop, tablet, phone, or printed materials; shoehorning them into mixed reality would make the concept try to cover too much functionality.

I also laid out my presentation and considered the necessary components to tell the story of my specific context and solution.

I made another video storyboard with a more specific vision of the individual story that I’d like to tell, instead of some of the broader concepts that I had previously.

12/10

I explored using Reality Composer with a green screen overlay to create realistic shots with a faked 'timed' interaction. I think I can use this method for the majority of my video to get a professional, realistic look.

I drew over some photos of my workspace to continue storyboarding the video and the interactions I wanted to show. The text on white is annotation for the storyboard and wouldn't appear in the video.

As I reviewed the brief throughout my development process, I wanted to address the potential invasiveness of these all-day projection glasses or contacts. The ability for visual content to appear at any moment in your view of 3D space through some sort of notification process is very dystopian and foreign to me. With current technology you may get notifications on your phone, wearable, or laptop, but they don't unexpectedly overlay your perspective of the space around you. To account for this, I made sure to show in my video that the thresholds for entering and exiting the MR space are similar to those for starting and ending a call, or opening and closing an application. The laptop serves as the interface to begin and end the space: a user has to go through an existing communication platform to organize synchronous calls, or they can individually set an appointment. This also allows file, participant, and hosting management to happen without intrusive UI in the mixed reality space. The threshold of going to a laptop or mobile platform, opening the launch application, selecting a 3D file, and choosing host options creates a consent process, so users explicitly understand when the projected experience begins and ends.
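
A sketch of that threshold as a simple state machine, to show how every step requires a deliberate action on the laptop before anything is projected. The states and names are my own, not an existing API.

```python
from enum import Enum, auto

class SessionState(Enum):
    IDLE = auto()            # nothing projected, no overlay possible
    FILE_SELECTED = auto()   # a 3D file chosen in the launch application
    HOSTING = auto()         # host options set, invitation sent via the class platform
    PROJECTING = auto()      # MR space active; everyone has knowingly opted in
    ENDED = auto()

# Allowed transitions: each one corresponds to an explicit laptop action,
# so content can never appear unannounced in someone's view.
TRANSITIONS = {
    SessionState.IDLE: {SessionState.FILE_SELECTED},
    SessionState.FILE_SELECTED: {SessionState.HOSTING, SessionState.IDLE},
    SessionState.HOSTING: {SessionState.PROJECTING, SessionState.IDLE},
    SessionState.PROJECTING: {SessionState.ENDED},
    SessionState.ENDED: {SessionState.IDLE},
}

def advance(current: SessionState, requested: SessionState) -> SessionState:
    """Move to the requested state only if the user performed the required step."""
    if requested not in TRANSITIONS[current]:
        raise ValueError(f"{current.name} -> {requested.name} requires an explicit step")
    return requested
```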

I created a slide for my presentation that situates my concept within the current software spaces that environments designers use, to demonstrate the gap in coverage that prompted my concept development.

Information architecture overview

To create my final video, I used a green screen with Reality Composer to do double takes of the primary overhead shot. I explored using After Effects to animate the 3D models but found it more time-consuming than using the green screen and timing the Reality Composer behaviors to certain interactions. For 2D hand-tracked elements I used After Effects, but the 3D model is a screen capture of Reality Composer. I wish the project timeframe had been longer so I could explore higher-fidelity video techniques; I had to screen-capture Reality Composer running on my older iPad's camera, so the video quality isn't ideal.

I considered wearing glasses in the video to mimic a future HoloLens-type device, but I worried about how performative or distracting out-of-place glasses would be. In a previous class discussion, it came up that these could take the form of contact lenses or a form factor less physically clunky than full eyeglasses.

I also wish I could've explored the 2D user interface integration and what some of the speaker controls on the laptop would've been. Zoom offers a useful toolbar, and I wonder how it would adapt to 3D space.

I went back and edited my persona spectrum to conform to the project requirement of three distinct personas instead of my previous approach of three categories along a spectrum. The revision focuses on each persona's situational context rather than more general sensibilities.

As I was watching the final presentations, I remembered something that Daphne had mentioned in a conversation. I asked a question about our previous project along the lines of “should I glue my foam interior walls down” and she responded something like “if we were all in the studio, Peter would be walking around your models and moving walls to try things out, so leave them free”. While I didn’t think of this as I developed my project, it serves as an example of the space that my concept exists in.
