projectquhere
Paul
20 posts
The Process of a Senior Project
projectquhere · 8 years ago
[Embedded Vimeo video]
I have posted about Requeerium as inspiration for my senior project on my blog before. This is the project’s “Inspirational Cut” documentation, and I believe it nicely captures the essence of the project.
The parts of the documentation that particularly stand out are the detailed testimony, the footage from the official demo, the background music, the use of a poster (both to advertise the project and to remind viewers of the documentation what the project is called), the large number of participants, and the video’s duration.
Using microphones to capture clear audio of testimony as users entered and exited Requeerium allowed the team to smoothly layer it over the footage, and the detailed testimony made the documentation feel more personal than it would have if shorter testimony had come from a larger number of users. The background music filled in the noise and sustained the two-minute video - without it, I think two minutes is too long for documentation. The poster (visible at the start of the video and 29 seconds in) reminds viewers of the name of the project, what it is about, and who created it. Even viewers who were not listening to the audio could still understand the project by looking at the sign, which was less intrusive than a title or text overlay. Additionally, the large number of participants captured in the footage made the experience seem more genuine. Seeing the same five participants interacting with the environment for the whole video would be boring and would give the impression that it was only demoed once.

I would have softened the transition out of the testimony audio to more seamlessly integrate the users’ voices into the documentation, but that was not a huge issue. What WAS a huge issue to me is that I was never notified when this video came out, and I was one of the users who spoke about my experience. I was never told I would be informed when the video was released, but when I searched for Requeerium’s documentation for my first post about it over a month ago, I was shocked to hear my own testimony. It was very personal, and I would have appreciated the team letting me know they were using my thoughts about the experience as part of their documentation.
I also wish they had not included the rambling second portion of my testimony. It was word vomit! It was pedantic! As I commented in my earlier post, I was overwhelmed with emotion, and I did not think before speaking. That’s fine - it was an organic, raw account of my experience - but it was messy.
projectquhere · 8 years ago
[Embedded YouTube video]
This is the prototype of quHere, my senior project in digital media, which I presented at the Senior Project class’ midterm demo session.
QuHere is a virtual reality experience usable in Google Cardboard, as previously explained on this process blog. While this video was recorded on my laptop, the current build works on a laptop with a mouse or on any Android device within Google Cardboard.
I managed to get the camera working with gaze input in a way that is comfortable for users in VR. Additionally, the cardboard button (or the mouse, in this video) is used to trigger events. When it is pressed while the gaze is centered on an object (which is highlighted in blue when stared at), two buttons appear. The circular button changes the object in that location, and the square button moves the player near that object.
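To make the gaze-and-click flow concrete, here is a language-neutral sketch in Python (the real build is C# in Unity; the class and method names below are purely illustrative, not the project’s actual code):

```python
# Illustrative sketch of the selection flow: gazing highlights an object,
# pressing the button while gazing opens two options, and each option
# triggers an event (swap the object, or move the player near it).

class Room:
    def __init__(self, slots):
        # slots maps a location name to the object currently shown there
        self.slots = dict(slots)
        self.gazed = None          # location currently under the gaze
        self.buttons_open = False  # whether the two buttons are visible
        self.player_at = "door"

    def gaze_at(self, location):
        # In Unity this would be a raycast hit; here we just record the target.
        self.gazed = location if location in self.slots else None
        self.buttons_open = False
        return self.gazed is not None  # True -> object highlighted blue

    def press(self):
        # Cardboard button (or mouse click): open the two buttons.
        if self.gazed is not None:
            self.buttons_open = True

    def press_circle(self, replacement):
        # Circular button: change the object in the gazed location.
        if self.buttons_open:
            self.slots[self.gazed] = replacement
            self.buttons_open = False

    def press_square(self):
        # Square button: move the player near the gazed object.
        if self.buttons_open:
            self.player_at = self.gazed
            self.buttons_open = False

room = Room({"corner": "plant", "wall": "poster"})
room.gaze_at("corner")   # highlight
room.press()             # show the two buttons
room.press_circle("lamp")
print(room.slots["corner"])  # -> lamp
```

The point of the sketch is only the ordering: nothing can be swapped or moved to unless an object is first under the gaze and the button has been pressed.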
I’m very proud to be at this stage - as far as I’m concerned, quHere has its first functioning prototype. There are four ways in which I aim to improve it, however.
1) Quantity and quality of content
- The mechanics behind the experience work. Now I need to make more 3D models and focus on a consistent aesthetic for more personalization options and to make the experience feel more realistic despite being low-poly.
2) Polish
- Low-poly particle effects, sounds, and glow effects would make the experience less jarring. Making the room look more lived-in (better floor texture, a baseboard, more objects, windows, etc.) could also make the experience better.
- Real buttons not only need to be implemented, but they need to be positioned better. I have a basic script to make them always face the player, but that matters little when they move through walls.
3) Test, test, test!
- Testing will allow me to best understand the users’ desire for movement. I’ve been advised to use jump cuts because movement on rails can cause motion sickness. I want to test whether these cuts are good and to see where players want to be able to move.
- Testing will allow me to create UI that matches the gamefeel and to determine if I need to use new camera angles (different player height, top-down for editing, etc.)
4) Multiple modes with a main menu
- Holding down the cardboard button should bring up a main menu with three options: Editor mode, Free mode, and Save Scene. Editor mode is this prototype. Free mode would allow you to look at objects and move around the scene without being able to edit anything (which removes the highlight effect as well). Save Scene would save the current layout of the room so that when the app is re-loaded, it accesses that layout.
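The “Save Scene” behavior could be sketched like this (an assumed design in Python; the actual build is C# in Unity, and the JSON file format here is only illustrative):

```python
# Minimal sketch of scene persistence: serialize the current layout so it
# can be restored when the app reloads. The layout keys are made up.
import json

def save_scene(layout, path):
    with open(path, "w") as f:
        json.dump(layout, f)

def load_scene(path, default):
    # Fall back to the default layout on first launch (no saved file yet).
    try:
        with open(path) as f:
            return json.load(f)
    except FileNotFoundError:
        return dict(default)

layout = {"corner": "lamp", "wall": "poster", "floor_texture": "wood"}
save_scene(layout, "scene.json")
print(load_scene("scene.json", {}))  # prints the restored layout
```

The fallback branch matters: on first launch no save file exists, so the app should load a sensible default room rather than crash.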
projectquhere · 8 years ago
I’d like to point out that the images do NOT expand on click like I believed they did. Only the first image does that.
I’m working on that. At the resolution at which my process blog displays photos, they are difficult to read. If I can’t make the photos expand when clicked, I will re-upload this post with larger photos.
[image]
Bedsider at NYU hosted a Sex-Positive Game Jam on 11/11/17. The goal was to promote sex positivity through play, to normalize sexuality, and to consider the ways in which games traditionally view sex as a reward. The particular challenge of this game jam was to consider “contraception beyond the condom”. The event ran 13 hours from start to finish and had enough participants that seven groups were formed to create games from scratch.
My group decided that, while contraception is important, several forms of birth control are also useful as sexual protection - not only from undesired pregnancy but also from STIs. Focusing on sexual protection instead of contraception moved away from the challenge, but it allowed more room to explore queer and anatomy-removed sexual concepts. Two of us (one group member and myself) had coding experience, so we set to work in Unity while the rest of the group worked on the presentation, art assets, and copy.
Below is a section of the mind map we created when coming up with ideas for this game.
[image]
Premise
Talking about sex is hard - especially when app-mediated. Navigating your partner’s comfort talking about sex is important for satisfying and safe sexual relationships. We captured that complexity by providing three main topics of conversation: STIs, Protection Methods, and Sexual Interests. You, as the player, navigate talking to a computer-controlled partner by pressing buttons to respond to their talking points. The partner randomly generates one of four STI statuses (HIV undetectable, being treated for asymptomatic gonorrhea, oral herpes in remission, or no STIs), one of four sexual interests (only oral, role play, power play, or toys), and one of five protection methods (never use protection, always use protection, mostly use external condoms, mostly use internal condoms, always use dental dams), and the player’s responses differ for each of those qualities. Some responses lead to further responses from the partner, in which case you get to respond again - but ultimately, the conversation about that particular topic ends, and you move on to other topics.
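The partner generation described above amounts to three independent random draws. A Python sketch (the jam build was in Unity/C#; this version is only illustrative):

```python
# Sketch of the random partner generation: one draw per quality.
import random

STI_STATUSES = ["HIV undetectable", "treated asymptomatic gonorrhea",
                "oral herpes in remission", "no STIs"]
INTERESTS = ["only oral", "role play", "power play", "toys"]
PROTECTION = ["never use protection", "always use protection",
              "mostly external condoms", "mostly internal condoms",
              "always dental dams"]

def generate_partner(rng=random):
    # Each quality is drawn independently, so 4 * 4 * 5 = 80 possible partners.
    return {
        "sti": rng.choice(STI_STATUSES),
        "interest": rng.choice(INTERESTS),
        "protection": rng.choice(PROTECTION),
    }

partner = generate_partner(random.Random(0))
print(partner)
```

Because the draws are independent, every combination of status, interest, and protection method can occur, which is what lets the end-screen content vary so widely.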
Game Goals
The objective of the game is to have the best sex! Your responses to the partner will increase or decrease intimacy. Intimacy is reflected by the brightness of the background colors during gameplay. Talking about topics in the right order increases intimacy because you are keyed into your partner’s comfort. Agreeing with your partner or being supportive increases intimacy. Being rude or disagreeing with your partner decreases intimacy. 
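A minimal sketch of how an intimacy score like this could be tracked and mapped to background brightness (the values and response names are assumptions for illustration, not the jam build’s actual numbers):

```python
# Sketch of the intimacy mechanic: supportive responses raise intimacy,
# rude ones lower it, talking in the right order adds a bonus, and the
# score drives the background brightness.

def update_intimacy(intimacy, response, in_right_order):
    delta = {"supportive": 10, "agree": 5, "disagree": -5, "rude": -15}[response]
    if in_right_order:
        delta += 5  # keyed into the partner's comfort
    # Clamp to a 0-100 scale.
    return max(0, min(100, intimacy + delta))

def background_brightness(intimacy):
    # Map intimacy (0-100) to a brightness factor (0.0 dark to 1.0 bright).
    return intimacy / 100.0

i = 50
i = update_intimacy(i, "supportive", True)   # -> 65
i = update_intimacy(i, "rude", False)        # -> 50
print(i, background_brightness(i))
```

Clamping the score keeps the brightness mapping well-defined: no response can push the background beyond fully bright or fully dark.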
Tension
We wanted players to feel pressure to talk about things naturally, but swiftly. The game thus took the form of “chicken”, where two players are on a path toward each other, and the player who pulls away first to avoid a crash loses. Instead of “losing”, pulling away means the player has removed consent, and sex does not happen.
The game has two circles moving toward each other at a steady rate. Within a short period of time, the player has to discuss complex sexual topics, decide whether to respond based upon their own real-world interests or what they think the partner character would want them to say, and then wait for sex to happen. The player can gauge their compatibility with the computer-generated partner by the brightness of the background. This is shown below.
[Embedded YouTube video]
The choice of circles removes players from any sort of sexual identity or anatomy-based decisions. They are free to explore sexual interests or protection outside of gender or within their own identities. The cycling colors in the background represent the order in which the partner wants to discuss the topics - talking in the right order would increase intimacy, but talking in the wrong order would decrease it. In a final build, the buttons for conversation would be color-coded to match the backgrounds as well.
No matter the correctness of order, the partner would want to discuss their stance on each topic, which gives the player more chances to increase or decrease intimacy. This shift in intimacy is shown in the video by the background getting brighter or darker.
End-game
Ultimately, the game rewards players for playing it no matter what decisions were made. When the two dots crash in the center of the screen, an end-screen would appear with educational content and the option to play again.
An important part of the design of this game for us was to not shame the player for making unsafe decisions, or for making any decisions. If the computer-generated partner randomly generated undetectable HIV and never used protection, and you as the player were supportive and decided that you were comfortable having unprotected sex (increasing the intimacy), we wanted to allow that scenario to occur. In the end-screen, resources on HIV, ways to be supportive of friends or partners with HIV in non-stigmatizing ways, and resources on sexual protection would appear. Similarly, if the partner generated no STIs, always used dental dams, and was into power play… but you as the player never got to talking about power play before the timer ran out… the end-screen would discuss the importance of being tested regardless of the presence of symptoms and provide resources on dental dams. Additionally, the end-screen would tell the player that they didn’t discuss power play until the middle of sex, and while it went well, talking about sexual interests in advance allows you to find comfort zones, be more present during sex, and establish safe words.
Sex is not inevitable in real life, even with a single monogamous partner or with a regular or new hookup partner. As such, the computer-generated partner also randomly generates an intimacy threshold. If the intimacy drops below that threshold, the partner ends the game, and the end-screen explains that you weren’t a match - and that’s okay. If you said something to offend the other player (ridiculing their safety preference, making fun of their sexual interests, or shaming them for their STI or lack thereof), the end-screen would present you with more polite ways to phrase sexual discussions, and it would still provide resources for the topics discussed before the partner called it off.
Similarly, at any point, the player can choose to remove consent and end the game early. If that happens, the end-game screen congratulates you for removing consent. Prior consent does not imply current consent, and people should not feel bad about rejecting sexual advances whether or not there is compatibility with the potential partner.
Final result
We didn’t make a game, but we had a strong presentation filled with chicken-related puns, our design process, and our goals and desires for the full version of the game. We demoed the working code we had and did a live-action version of the conversation simulation portion of the game. 
Half our group had to leave early, and our game required a lot of text, formulae, and coding know-how that was hard to achieve with a smaller group. The portions of the game that were not completed were the generation of buttons with different text options and an end-game screen that generated content depending on your responses, but the background color cycling, player movement, and random partner characteristics all worked perfectly, and all the copy was written.
Below are some examples of the way conversation was handled, moving from brainstorming to MonoDevelop (click on images to zoom in).
[images]
Finally, how is this relevant to QuHere?
It’s tangential in that we wanted to create a game that did not privilege heterosexual activity over other forms of sexual activity and that allowed users to explore different ways of discussing sexual issues in safe ways. Players are allowed to be rude, allowed to mess up, allowed to stick to their own personal beliefs, or allowed to role play with decisions they would never make in real life. In the end, the game does not shame players; it provides resources to learn more about safer sex regardless of what they choose to do.
Creating a queer-friendly sex game itself is relevant to QuHere as both are different forms of digital experiences used to empower the often-marginalized queer community. Additionally, this game allows players to explore sexual safety, whereas QuHere allows players to explore non-sexual safety and ways to generate it in the real world (by considering the real world in new ways or by entering the virtual safe space of QuHere whenever necessary).
projectquhere · 8 years ago
[images]
My project started with an issue of scope. I forced my own constraints by not having the technical skills or time to build the full experience I intended to make. Feedback from professors, peers, and family was warm if not outright enthusiastic, but everyone encouraged me to limit the scope. I’ve been meeting with professors to discuss their VR workflows and to become more comfortable using C# and Unity to make a VR experience. Today, I met with my Game Development professor to talk about my senior project. I walked him through some of the theoretical groundwork of my project, explained how I’ve taken on a senior project for which I had few of the prerequisite technical skills in order to challenge myself and to do a project meaningful to my career interests, and showed him my process blog.
Having been given the elevator pitch, supporting art and documentation, and the “then and now” models, my professor said my sense of scope seems very appropriate (which was a huge relief to hear). But so that both he and I could better understand exactly what I needed to accomplish, he asked me to write out the systems within the experience.
Breaking down the experience into its necessary components and then breaking them down into their subcomponents allowed me to better understand how to take my ideas and make them concrete in Unity. Additionally, it allowed me to think of prototypes and optimization, of how to tell the whole story of my experience with as little as possible.
That brings us to the pictures above. In my prototype, I want players to be able to modify a room with specific objects in specific places. If the player wants to change the objects, they can replace them with other objects that exist in exactly the same place. After today, I realized this is achievable by making all of the candidate objects exist in the same space in Unity, with only one “enabled” at any given time. The photos above demonstrate what the scene looks like to the player (left) and what the scene truly is (right).
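That enable/disable idea can be sketched outside of Unity like this (in Unity each option would be a co-located GameObject toggled with SetActive; the Python below is only a stand-in with illustrative names):

```python
# Sketch of "all objects exist in the same spot, only one is enabled":
# every candidate object shares one world position, and a flag decides
# which single object is visible at a time.

class Spot:
    def __init__(self, objects):
        # All candidate objects co-exist here...
        self.enabled = {name: False for name in objects}
        # ...but only the first starts enabled.
        self.enabled[objects[0]] = True

    def visible(self):
        return [name for name, on in self.enabled.items() if on]

    def show(self, name):
        # Enabling one object implicitly disables all the others.
        for key in self.enabled:
            self.enabled[key] = (key == name)

spot = Spot(["bed", "tree", "treasure"])
print(spot.visible())   # -> ['bed']
spot.show("tree")
print(spot.visible())   # -> ['tree']
```

The invariant worth preserving is that exactly one object per spot is ever enabled, which is what makes the left and right screenshots differ while describing the same scene.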
I then set up some basic first-person controls with mouse movement serving as “gaze”, and I created a basic system that prints a message to the console depending upon what the player sees. In the photos below, the player looks at a tree, the bed, and some treasure. You can trace my movement through the scene by the green debug rays being cast in the scene view on the right.
Tomorrow, I will be building out a UI so that, upon pressing a button while staring at an object, the player will be able to see the other options for objects that can exist in that space and then swap them out. Afterward, I will modify my code to work with VR, which should be fairly simple - head movement controls the camera, which already has the ray casting system implemented, and the only other possible interaction would be the “cardboard button” (which can only register being initially pressed, held down, or released).
[images]
projectquhere · 8 years ago
[images]
This is a new model of the environment I intend to design, as advised by certain professors in consideration of the time and technical skill constraints of my senior project.
The new design involves a tile system in which specific objects can only exist in specific areas of the room. While this limits self-expression somewhat, it allows for a more rigid system that does not need to worry about physics, abstract concepts of space, and potentially spawning infinite amounts of objects.
When I was given the advice to use a tile-based system, my mind immediately went to my favorite classic video game: Pokemon Gold Version. In the game, the protagonist has a room in his mother’s house, and his mother continuously spends money you earn throughout the game on decorations. Whenever you visit home, you can redecorate your room.
The amount of player choice is very limited. There are only a couple dolls, posters, and beds to choose from. Yet the customization options that exist allow players to take ownership of their room and mold it to their liking. A game series that allows for much more customization than I intend to provide but that uses a similar concept is Animal Crossing. The player can construct objects in a scene and place them in specific areas inside their house - additionally, the character can create and apply essentially any texture they want to their own clothing and to many objects.
My updated idea sits between the complexity of Pokemon and Animal Crossing. The player will be able to cycle through a collection of objects for specific parts of their room, like in Pokemon, but they will also be able to choose from a certain set of textures for the floor and walls, like in Animal Crossing. Additionally, lights and sounds will be controllable as well. These two settings (decorating and texture control) are under the “arrange room” and “ambience” settings in the main menu. The game will be first-person, and in free mode, the player will be able to walk around the environment by rotating their head and holding the cardboard button to move forward. This allows them to see the room in all dimensions. Additionally, in “Use the Room”, the player will be able to access certain angles and positions unavailable in “Free Mode” (such as sitting in a chair or lying down in bed).
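A sketch of how the tile and ambience cycling could work (illustrative Python with made-up tile and option names; the actual build is C# in Unity):

```python
# Sketch of the tile-based design: each tile only accepts its own small set
# of objects, and "ambience" settings (textures, lights) cycle the same way.

def cycle(options, current):
    # Move to the next choice, wrapping around like Pokemon's decoration menu.
    return options[(options.index(current) + 1) % len(options)]

room = {
    "tiles": {
        "bed_tile":  {"options": ["bed", "futon"], "current": "bed"},
        "desk_tile": {"options": ["desk", "bookshelf", "empty"], "current": "desk"},
    },
    "ambience": {
        "floor": {"options": ["wood", "carpet", "tile"], "current": "wood"},
        "light": {"options": ["warm", "cool", "dim"], "current": "warm"},
    },
}

def cycle_setting(group, name):
    slot = room[group][name]
    slot["current"] = cycle(slot["options"], slot["current"])
    return slot["current"]

print(cycle_setting("tiles", "bed_tile"))   # -> futon
print(cycle_setting("ambience", "floor"))   # -> carpet
```

Because each tile carries its own small option list, the system can never spawn unbounded objects or place something where it doesn’t belong, which is exactly the constraint the tile advice was meant to buy.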
This simplified version should be doable with basic interactions with Google Cardboard and the Google VR for Unity Software Development Kit. 3D models for objects are being created, though placeholder objects in the same aesthetic may be used to test the mechanics. Future versions of the project could allow for players to create an avatar (similar to Animal Crossing) and place a mirror in the scene as well as replacing some of the experience’s 3D models with 3D models of their own to further customize the experience.
As another reference, the images below the cut showcase a house from Animal Crossing: New Leaf, as well as the Pokemon protagonist’s room on Gameboy Color and on Nintendo 64. These sources show how a low-polygon, simplistic decorating system can be used to create unique, personal environments. They also give an idea of what the scene could look like with color.
[images]
projectquhere · 8 years ago
Compromised account and update
Somehow, I lost access to my account over the weekend. An error message appeared whenever I tried to sign into Tumblr or even visit any of my blogs from another device or incognito mode, though there was no error code and there was nothing in the FAQs or customer support pages referencing an issue like this. There was no way to progress beyond the error message to sign out of my account or to see my dashboard, and I could only change my password by attempting to sign in from incognito mode (which I did, and it didn’t help).
As such, I was unable to post my new model when I thought I would. I put out a support ticket as soon as I learned about the issue, and it was resolved this morning.
I will be adding my models for the updated experience today, and I’ve been creating some 3D models for the experience, renders of which I will upload today and on an ongoing basis.
My survey is collecting responses, and I intend to have a rough prototype for testing by Wednesday evening. My original timeline had two user tests completed by the end of October, but the old timeline also did not account for issues with how quickly Unity and Google’s VR are developing. I feel confident in my ability to progress!
projectquhere · 8 years ago
Updates: Research and Meetings
I attended two events critical to the development of my project this week. Read about them below the break.
Firstly, I went to an event called “Queering the Virtual Experience: Challenging Heteronormativity in Virtual Spaces”, in which Neill Chua, a queer designer and graduate of my major’s graduate program, described his process when designing his thesis. He built a virtual reality voguing simulator in Unity that is playable using an Oculus Rift and a keyboard. My biggest takeaway was the quotes and definitions Chua used to inform his design.
When discussing what makes a space “queer”, Chua pulled from the theories of Aaron Betsky, who wrote Queer Space: Architecture and Same-Sex Desire. In that work, Betsky writes that “by its very nature, queer space is something that is not built, only implied, and usually invisible. Queer spaces do not confidently establish a clear, ordered space for itself... it is altogether more ambivalent, open, self-critical... and ephemeral”. Additionally, Chua described the ideas of Lauren Berlant and Michael Warner, whose work Sex in Public centers on challenging heteronormativity. The goal of their research is to “describe... radical aspirations of queer culture building: not just a safe zone for queer sex but the changed possibilities of identity, intelligibility, publics, culture, and sex that appear when the heterosexual couple is no longer the referent or the privileged example of sexual culture”.

The former quote justifies the creation of a “queer” safe space - a space is queer because it is implied to be queer. This allows gay bars, gyms, clubs, and safe spaces to exist - they’re not necessarily “designed” to be gay, nor are the objects and structures that build those spaces particularly gay by nature. The inhabitants, use, and energy of the space make it gay. Those who exist in the space (and outside of it) agree to its queerness. To me, this explains why many gay people get frustrated when parties of straight people enter a gay bar (especially if the straight people are rude or feel entitled to special treatment). While straight people can enter gay bars, there is a social contract implied upon entering: the inhabitants of the bar will respect the safety, autonomy, and expression of non-heterosexual, non-heteronormative concepts.
Groups of heteronormative people, especially cisgender people who feel entitled to special treatment or to gaze at the “spectacle of the gays”, destabilize the queerness invisibly imbued into the space. Because of this, I am more confident expressing that the safe spaces I create are meant for queer people. Many people have asked, “What makes a VR space gay - can’t everyone just benefit from using your idea?” My response has been that it’s not necessarily a gay space, but a regular space made by and for gay people that I’m requesting non-queer people don’t use. With Betsky’s definition in mind, I am more confident saying, “It is gay because it is implied to be gay, just like most gay spaces are. By creating this space as someone who is queer, and designing it with the needs of various differently-queer people in mind, I’m implying that it is queer - and even though the quality is invisible, it deserves to be treated with respect and autonomy by non-queer people and its queer inhabitants alike.”

The second quote explores the multitude of possibilities when society does not privilege heteronormativity, and it helps me understand how to frame my idea if asked how it is “child-friendly”. Yes, sexuality and gender are concepts linked to sex - but there are so many structures in the world that are also influenced by gender and sexuality, including identity as a whole, culture, public spaces and behavior, and forms of communication, social interaction, and even education. I would argue that queer needs for safety differ from heteronormative needs for safety - I could point to statistics about hate crimes as an example, but preliminary responses to a survey I created about safety show differences in the responses of straight/cis and non-straight and/or non-cis people [link to data analysis when I am comfortable with the number of responses].
If we privilege heteronormative needs for safety and heteronormative safe spaces (or architecture and design in general that privileges heteronormative lifestyles and structures), we lose the ability to understand a wide range of ways of living, and we lose the ability to meet the basic human needs of safety and security for queer populations.
Secondly, I met with a professor on the floor in which my program is hosted who has experience developing games for VR within Unity. I showed him this blog (with models and theory), and I told him what I was struggling with the most (technical skills for VR in Unity). He gave great advice regarding achieving my goals with my limited technical skills within the short timeframe of the rest of the semester. 
To be specific, my original concept would involve moving objects accessible through an inventory around a scene to create a virtual space that created a sense of safety. I’ve been trying to tone down the scope and to pull from designers who consider a small space and limited interactions - experiences like Daily Life VR: Poop (only interaction is gaze and touch-based cartoon poop generation), Love Boat (a scene in which a user rides a love boat through a visual retelling of the creation of a cult), and Draw Me Close (an experience using both VR characters and “reactive actors” to give physical sensation to VR experiences - like being hugged or tucked into bed). So the concept I brought to this professor was a bit smaller scale - it would be a premade room, but with the ability to access an inventory to add onto it.
He gave the advice of using a tile-based system, and I lit up because it’s perfect for my project’s needs. My prototype will be developed around my own needs and desires for safety, so I am 3D modeling totems that represent safety for me. Creating a premade room with specific tiles in which a specific, small range of items can go still allows users to be creative and have ownership over their space, but it avoids certain issues with rendering diegetic UI, file size, object manipulation at runtime, and object manipulation in VR with a first-person camera. It also limits the scope in a constructive way.
I will be posting a new model tomorrow, and I have a meeting with another professor with VR development experience to work on some technical skills to bring my vision to life.
projectquhere · 8 years ago
[images]
Update: A basic model
To mock up virtual reality interactions in a space that mimics the real world, it makes sense to use a real-world space that provides me a sense of safety. In this case, I used my living room.
The model here is intentionally simple. Users select objects, have full control over their transform, and deselect to view the altered space.
The first VR prototype I will be testing will include a UI that comes off the selected object and stays in a specific location in 3D space (not following the user). This is what Unity calls “diegetic UI”, as opposed to “spatial UI”, which places the interface in space in relation to the user.
I want to test forms of UI that are intuitive and comfortable in VR. Providing large circles as interactable icons should make them easy to interact with via gaze. The intention is to make interaction easy and to limit the chance of nauseating players. Through testing, I will refine the shape and location of icons (potentially surrounding selected objects, potentially a set distance from the user, etc.)
projectquhere · 8 years ago
[images]
Update: VR Progress and prototype player
Today, I met with my virtual reality professor from a past semester for guidance on using Google’s VR for Unity Software Development Kit. His advice: Don’t use it. Build the interactions myself first, then dig through the SDK to find code that is useful.
That made the hairs on my arms rigid with anxiety, but I know it’s good advice. I’m not just making a product, but a process. I’m a student of digital media, and I may as well learn how to build games and VR experiences the right way. It’s scary to hear that advice seven weeks into a semester, but I felt motivated to make progress.
And today, I finally did. After meeting with my professor, I searched documentation on Unity’s raycasting capabilities. Ultimately, I have a very basic raycasting interaction that is based upon the user’s gaze.
In the screenshots, the “Game View” is on the far left - this is what the player would see within my experience. The text is the console, printing specific messages that only appear when the player is (or is not) looking at specific objects in the scene - a very basic, context-specific interaction. Additionally, the scene view below the console, which shows the same environment as the game view, includes debug rays cast every frame.
When the user looks directly at a part of the cube, the debug rays turn green and the console prints that there is something in the way, and that it is a cube. Otherwise, the rays are red and the console says there isn’t anything in the way.
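The gaze check itself boils down to a ray-versus-box test. The real implementation uses Unity’s Physics.Raycast; the Python below is only a toy stand-in for the same idea, using a standard ray/axis-aligned-box intersection:

```python
# Toy version of the gaze raycast: cast a ray from the camera and report
# whether it hits the cube, mirroring the green/red debug rays above.
# Uses the "slab" method: intersect the ray with each axis-aligned slab
# of the box and check that the intervals overlap.

def ray_hits_box(origin, direction, box_min, box_max):
    tmin, tmax = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-9:
            if o < lo or o > hi:
                return False  # ray is parallel to this slab and outside it
        else:
            t1, t2 = (lo - o) / d, (hi - o) / d
            tmin = max(tmin, min(t1, t2))
            tmax = min(tmax, max(t1, t2))
            if tmin > tmax:
                return False  # the slab intervals no longer overlap
    return True

cube_min, cube_max = (-0.5, -0.5, 4.0), (0.5, 0.5, 5.0)
# Looking straight ahead (+z): the ray passes through the cube.
print(ray_hits_box((0, 0, 0), (0.0, 0.0, 1.0), cube_min, cube_max))  # -> True
# Looking to the side (+x): nothing in the way.
print(ray_hits_box((0, 0, 0), (1.0, 0.0, 0.0), cube_min, cube_max))  # -> False
```

In the real build, a True result corresponds to the green debug ray and the “it’s a cube” console message, and False to the red ray.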
I’m working on making the process run more smoothly and reliably, but it is a major first step. I’ll be creating another prototype of this interaction with more types of models by the end of the week.
projectquhere · 8 years ago
[images]
Update: Unity Familiarization
My capstone project requires the use of Google’s VR for Unity Software Development Kit. Its built-in gaze-based interactions meet the needs of my project perfectly, and its code is naturally compatible with Google Cardboard interactions.
As previously stated, all the tutorials I’ve found for the SDK are outdated - and if I were going to learn how to use the updated version properly, I would need a much deeper understanding of Unity.
So I started with the basics: Video game tutorials provided by Unity itself.
Unity’s first two tutorials include a simple ball-rolling/item-collection game in 3D and a top-down space shooter game. The first generates text upon completion of the entire game, alerting players that they have won. The second is potentially infinite: the player racks up points for blasting asteroids out of the sky, gameplay ends when the player collides with an asteroid, and the player is prompted to restart or exit. Both tutorials were originally created with older versions of Unity than the one I am using. Unity’s libraries have since been updated, so some of the tutorial code no longer works. Completing these tutorials became just as much an exercise in finding answers to frustrating problems as in familiarizing myself with Unity.
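One concrete example of the kind of breakage (the API change is real, but the script here is my hypothetical reconstruction of the ball-rolling tutorial’s player movement, not Unity’s exact code): older tutorials use shortcut properties like `rigidbody` directly, which newer Unity versions removed in favor of explicit `GetComponent` calls.

```csharp
using UnityEngine;

// Hypothetical reconstruction of a tutorial-style player movement script.
public class PlayerController : MonoBehaviour
{
    public float speed = 10f;
    private Rigidbody rb;

    void Start()
    {
        // Older tutorial versions wrote "rigidbody.AddForce(...)" using a
        // shortcut property that no longer exists; the component now has
        // to be fetched explicitly.
        rb = GetComponent<Rigidbody>();
    }

    void FixedUpdate()
    {
        // Read WASD/arrow-key input and push the ball with physics forces.
        float moveHorizontal = Input.GetAxis("Horizontal");
        float moveVertical = Input.GetAxis("Vertical");
        rb.AddForce(new Vector3(moveHorizontal, 0f, moveVertical) * speed);
    }
}
```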
I’m not a strong coder. I can certainly get by - I can follow tutorials, read documentation, make inferences, and ask good questions to get to the right answers. I’m even starting to develop more of an instinct while coding. But I’m not a coder and I’m not trying to be, hence my comfort using tutorials and open-source code to get the basic interactions I need to make my projects succeed.
With that said, I’m proud of my work on these two sample games. While working on my capstone, I’m also in a graduate-level game design studio, and it is just entering the coding unit (after foundations and ludology units). I hope to take what I learn from that class and from continued Unity tutorials to improve the efficiency of my code on this project.
projectquhere ¡ 8 years ago
Major Update and Week 7 To-Do
I have made major progress toward the completion of my project and getting back on my original timeline.
This post will be updated to turn each progress point into a hyperlink to a post specifically dedicated to the progress I made.
- I created a basic inventory system in Unity that I am using as a prototype of my VR project’s inventory system
- I built two games using Unity’s tutorials, both to familiarize myself with the engine’s UI and to become more comfortable scripting in C#
- I built models of my project to spatially inform my design
- I updated my resume and portfolio and found 20 jobs to which I would like to apply within the week
- I scheduled an office hours visit with my VR professor from a previous semester to better learn how to design for VR in Unity
Immediate goal: Meet for office hours regarding VR tomorrow
Within this week, I will be making individual posts for each of those points of progress. I also will be 3D modeling basic, low-poly assets for use within the inventory system. I know 3D modeling is not my greatest strength, so if need be, I will user-test with free low-poly assets from the Unity Asset Store that share a similar aesthetic.
Depending on my comfort with the VR design office hours, I will try to build a basic VR gaze-tracking system this week or push it off to a long-term goal. I want to keep the productivity of Week 6 burning for the rest of the project, and I’ve taken steps to have more time to devote to this project - but until I have more time within the week, I want to focus on tasks within my grasp. VR interactions can be integrated later on without harming my project.
Long-term goals include user-testing and updating the inventory UI for VR - on both of which I hope to make significant progress before the end of October.
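For a concrete sense of what I mean by a basic inventory system, here is a hypothetical minimal sketch in plain C# (all names are my own, not my actual prototype): a capped list of item IDs with add and remove operations.

```csharp
using System.Collections.Generic;

// Hypothetical minimal inventory model - a sketch, not the real prototype.
public class Inventory
{
    private readonly List<string> items = new List<string>();
    private readonly int capacity;

    public Inventory(int capacity)
    {
        this.capacity = capacity;
    }

    public int Count { get { return items.Count; } }

    // Returns false instead of throwing when the inventory is full,
    // so UI code can react (e.g. flash a "full" indicator).
    public bool Add(string itemId)
    {
        if (items.Count >= capacity)
            return false;
        items.Add(itemId);
        return true;
    }

    // Returns true if the item was present and removed.
    public bool Remove(string itemId)
    {
        return items.Remove(itemId);
    }
}
```

Hooking something like this up to Unity’s UI is then mostly a matter of rendering `Count` and the item list whenever they change.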
projectquhere ¡ 8 years ago
I posted this to my wrong blog several days ago. I’m writing in this format to include the timestamp from the first post!
Experiencing Requeerium
Requeerium was the major inspiration for my senior project, quHere. It was a mixed-media experience that used motion capture technology to create an immersive, interactive environment in which users explored queer narratives.
At the exhibition I attended, Requeerium was set up in a large, square room with white walls. Projections cast the environment onto three of the walls, with the fourth serving as a glass door to enter and exit the environment. People’s voices could be heard sharing their experiences with harm, love, and life as queer people. The background changed over time to match the themes of the recorded, spoken narratives, but it was also manipulated by the inhabitants’ use of “totems”. The floor was littered with objects relevant to the spoken narratives, each with motion sensors taped to it, and as users of Requeerium moved while holding the totems, the scene reacted. Sounds, lights, and even entire 3D models warped, became more prominent, or totally disappeared depending on where the totems were. There were moments where the cameras could not read the totems’ sensors, but the technical issues were often short-lived.
I entered Requeerium with a friend, both of us possessing identities traditionally encompassed by the word “queer”. It was an emotional experience for them, but I struggled to feel much emotion at all. I could only understand it as an interesting motion capture experience, and I did not consider it very relevant to me. My friend told me that it was okay if I didn’t feel anything, but they were glad I was in the space with them because we, and the other users, were the ones building the space. In their words, the space wouldn’t exist if not for my experiences as a queer person - it was a space for me.
I’m not sure what happened, but hearing that, I cried harder than I had in years. As cliche as it sounds, when I opened up to the intent of the experience, I embraced my queerness in a way I had never let myself before.
As someone who has lived through physical, sexual, and emotional trauma because of my queerness - and someone who has made treasured friendships and felt love because of my queerness - Requeerium very suddenly meant a lot to me. I felt flooded with emotion, and I wanted to try everything to explore the environment to its fullest. I wanted to use all the totems, and I became frustrated if the signal cut out while trying to use them. It was my space, and I wanted to know it in a rich, thorough way.
Ultimately, users were asked to leave the space to allow others to try it. I left feeling somewhat uncomfortable, but I felt energized and at peace. I was considering my needs for control over my self-expression and my queerness, and I was considering how my queerness has shaped my life and potentially other environments in which I’ve lived. This was significant because it meant I was considering my queerness at all outside of intimate relationships.
Requeerium has since been updated to a virtual reality experience to bring its stories and metaphor to a wider audience. While I have not yet experienced it in VR, I hope to have the chance to see it at a future exhibit and to work with one of the developers (a peer from my program at NYU) in the future.
I’ve taken more ownership of my queerness since entering Requeerium. I want to provide to others the sense of ownership over space that Requeerium offered to me, and I want to build an environment in which users can both feel pride in creating something and comfort in being in a space designed for their safety.
quHere is less focused on narrative than Requeerium is. It is not meant to be situated in physical space, and it is not meant to be a social experience. But without Requeerium, I would not have thought nearly as hard about my needs or my life as a queer person.
projectquhere ¡ 8 years ago
Week 6 To-Do’s
Immediate:
-Upload post with Unity Game Tutorial screen recording to show work
-Finish Unity Inventory System tutorial 
-Finish Scrolling game tutorial (to familiarize with Unity UI)
-Email Games Professor to set up office hours to consult on basic interactions
Before Week 7:
-Make mock-ups of potential VR environments on paper and in Photoshop
-Refine the Inventory System to make sense for my project
-Write survey questions regarding safety, begin distribution
-Write interview questions to test the inventory system and VR interactions
-Make VR interactions with VR Professor at office hours (Next Week)
Ongoing:
-When the first version of all interactions actually work, build a testable prototype and begin testing
-Ask queer peers thoughts about safety
-Design low-poly 3D models
-Continue refining portfolio, resume, cover letter, and LinkedIn
-Update domain name of my portfolio
projectquhere ¡ 8 years ago
Disappointing Development Moment
I did not allocate enough time in my original timeline toward developing a set of interactions using Google Cardboard.
There are many tutorials available on YouTube for creating VR interactions compatible with Google Cardboard. Some I have found cover head gesture recognition, gaze input and timed gaze input, and Cardboard button input. Realizing the Cardboard button is available as a form of input is greatly beneficial to my project: it both lowers the number of discrete gestures I need to code for interacting with the VR environments and allows people with limited head and neck mobility to build environments more easily.
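As a sketch of what timed gaze input looks like (a hypothetical “fuse” script with my own names, not code from any of the tutorials): an action fires only after the user holds their gaze on an object for a set dwell time, so no button press is needed at all.

```csharp
using UnityEngine;

// Hypothetical timed-gaze ("fuse") input sketch. Attach to any object
// with a collider; the action fires after sustained gaze.
public class TimedGazeTarget : MonoBehaviour
{
    public float dwellTime = 2f;   // seconds of sustained gaze required
    private float gazeTimer = 0f;

    void Update()
    {
        // Raycast from the camera along its forward (gaze) direction.
        Transform cam = Camera.main.transform;
        RaycastHit hit;
        bool gazedAt = Physics.Raycast(cam.position, cam.forward, out hit)
                       && hit.collider.gameObject == gameObject;

        // Accumulate gaze time while looked at; reset the fuse otherwise.
        gazeTimer = gazedAt ? gazeTimer + Time.deltaTime : 0f;

        if (gazeTimer >= dwellTime)
        {
            gazeTimer = 0f;                        // re-arm the fuse
            Debug.Log("Gaze-selected: " + name);   // placeholder action
        }
    }
}
```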
Though I’ve watched those videos, I did not start developing my interactions at the same time. I’m new to Unity - my only experience with it has been the Game Design Studio class I’m currently taking. I wanted to become more familiar with Unity by watching tutorials before diving into development and getting stuck.
Unfortunately, all the tutorials I watched are outdated. Unity and GoogleVRForUnity have been updated to the point where even the most basic of VR prefabs no longer exist. From reading the documentation, I learned this is because Unity now handles VR natively, and GoogleVRForUnity has been streamlined in response. I can only imagine those changes are great for experienced developers, but as somebody new to Unity, I am struggling.
But what does that mean for my project?
Nothing except that my VR interactions were not done on 10/04/17 as originally planned - they’re still not done. What I thought would be a matter of following tutorials and then “getting creative” has turned into me needing to devote a much larger amount of time toward understanding the basics of Unity and its VR capabilities than I anticipated.
Fortunately, I can work on the next task of my project simultaneously. Understanding Unity better should give me a better grasp of the vocabulary necessary to ask good questions and to navigate the program without tutorial assistance. This should allow me to be more comfortable developing VR interactions and make building an inventory system less complicated. 
I’m disappointed in my inability to meet my very first deadline on this project, but I believe it is for good reason. I have responsibilities other than this project (graduate-level game design studio, weekly physics quizzes, other homework, and a part-time job), and through using the time I originally devoted to building a prototype of VR interactions, I have realized I underestimated the activation energy required to get this project started.
TL;DR:
The VR interactions are coming, and expect documentation when they do!
projectquhere ¡ 8 years ago
Reflection on “Daily Life VR - Poop”
This post is a reflection on “Daily Life VR - Poop” (“Poop”, for the sake of brevity), an experience exploring companionship in traditionally solitary spaces. Below is a summary of the experience, my analysis of it, my frustrations with it, and finally the questions that using “Poop” prompted for my own VR development. Try Daily Life at: http://dailylifevr.com/
The creator of Daily Life, Laura Juo-Hsin Chen, develops VR experiences with low-tech approaches to bring VR to a wider audience. Her work can be found at: http://www.jhclaura.com/
"Poop” opens at a splash page that prompts the user to enter their name. Upon doing so, they enter a 3D-modeled bathroom with instructions for interacting with the scene written along the walls. A meditative voice explains the purpose of the experience before the camera shifts and users fall backward through a flesh-colored tube and land on top of a large toilet bowl under a starry sky surrounded by rolls of toilet paper. Looking directly up allows users to see a board with writing on it that details how many people are currently also using “Poop”, how many poops the user has produced, and how many people have ever entered the VR space. There is also an exit sign that allows the user to be pulled back into the starting bathroom environment.
The strength of this VR experience is its use of metaphor. “Poop” forced me to reconsider my private, solitary, and usually mundane routine of pooping within a social context. Sitting on the familiar and private object of a toilet bowl while in the exposed environment of a dark, starry sky forced me to consider my need for privacy and to process shame I feel for being watched while pooping. The experience looped until I chose to exit it, and in the second loop, I paid more attention to the audio. It took me on a guided meditation of pooping, feeling the sensation of movement within my body and ultimately the feeling of poop leaving my body (ending with a quiet splash and a sigh of relief from the narrating voice). Focusing on the narrator’s voice pulled me out of my own head and made me imagine that everyone else in the scene also was pooping and going through the same bodily feelings I was going through.
I knew this couldn’t be true - the first time I tried the experience, I wasn’t actually pooping, and there wasn’t anyone else in the VR scene. But the second loop, in which I interpreted the presence of others as a shared experience and not a vulnerable or shameful one, was far more productive than the first. I found I wished I could see the other users of the experience.
That ties into the ways I was frustrated with “Poop”, and these are not the designer’s fault.
Firstly, “Poop” is a social experience with assets dedicated to telling users how many people are pooping with them and how many people have used the space before them. But “Poop” debuted in May of 2016, and the frequency of its use has substantially declined. The documentation video showed that users should be able to see avatars of other users in the space - I saw none. That was disappointing for me as a user, though as a designer, it makes me consider the sustainability and lifespan of my virtual reality spaces.
Secondly, “Poop” is a VR experience that requires touch or arrow-key input to move around and to interact with the environment. Google Cardboard allows for neither, yet smartphone-based headsets like Google Cardboard are the only ones that let users enter this experience while actually pooping, because they have no cords and do not rely on a computer. Because I used Google Cardboard, I could not “poop” (by touching the screen or clicking the left mouse button), I could not walk around (by holding the screen or using the arrow keys), and I could not exit the scene. This made me consider how I could create meaningful and complex interactions while relying upon Google Cardboard’s limited input.
Thirdly, despite being incredibly low-poly, the environment ran much slower on my phone than it did on my laptop. This made pooping (by taking off my headset and tapping my phone) unrealistically slow. It took nearly a full minute for the cartoon poops to fall from the origin of the camera to the bottom of the bowl, something that took only a second while testing “Poop” on my laptop later.
I enjoyed the experience and its aesthetic. It reminded me that scenes I build do not need to be complex or totally filled with 3D assets for users to understand the space or to find meaning in the space. Trying the experience prompted three major questions for the development of my senior project, quHere:
- How do I improve the longevity of quHere? How can I make it useful to users even when its frequency of use lowers over time?
- What forms of interaction feel the best and make the most sense while using Google Cardboard?
- What is the right balance of fidelity and performance to make an environment that runs smoothly on any smartphone?
projectquhere ¡ 8 years ago
To Do and Reflection: Week 4
Week 3 into Week 4 was not as fruitful as I would have hoped. I distributed my time too heavily toward my presentation and a conference at which I both presented and attended sessions - good uses of my time, but I did not get as much exploratory work done as I did in the first two weeks of class, nor did I research as much as I had hoped. Between Weeks 3 and 4, I tried out a relevant VR project and found the documentation for a mixed reality experience I attended a while back that inspired the direction of my capstone project.
I will be posting today and tomorrow about my experiences entering those environments.
Immediately:
-Write about Daily Life VR
-Consolidate notes about Requeerium
-Create a full playlist of videos for VR interaction prototyping
By Wednesday, 10/4:
-Have a testable VR head-based interaction prototype
-Research Playtest Thursday
Ongoing:
Test more VR products
Learn about research resources (help setting up focus groups, surveys, etc.)
Go to The Center more often
Engage non-student friends who identify as queer in my academic life
projectquhere ¡ 8 years ago
To do: Week 3
Immediately:
- Make outline for Senior Project Proposal
- Review links to VR creators’ work given by my peers and professor
- Clear storage on laptop and phone
Before Wednesday:
- Another VR head tracking YouTube tutorial
- Write analysis of at least two of the VR works being immediately reviewed
- Finalize Senior Project Proposal presentation
Ongoing:
- Download Google developer kit for VR
- Try to make oSTEM meetings