Cybernetic Psychology: An Explanation of Human Communication Through the Eyes of an Engineer

Me, I’m an engineer. Mainly in computers, and then mainly in 3D graphics, mainly Virtual Reality. She, non-native English-speaking psychologist. So we have cultural as well as linguistic differences. When we talk, it’s not always successful, so we offer each other advice on ways to improve this communication.

It struck me that it is much like the Internet. There are two basic protocols machines use to exchange information: TCP and UDP. With TCP, when data is sent, the receiver checks that the information arrived intact and informs the sender, who then sends the next packet of data, or resends the previous one. With UDP, the sending device just sends what it can, not worrying about whether the data got through intact; the receiver does its best to interpret it. This is used for audio and video streaming, where speed is important and we humans can tolerate glitches. TCP is more ‘reliable’ but slower, because the two devices take the time to ensure the message is heard completely.

If there is static in the line, if there is bad weather or a loose wire or just old devices starting to go bad, communication is poor. The operators can then decide to just forget it, or fix the problem. Perhaps a new cable is needed, or some contacts need to be cleaned. Repairing these connections helps to ensure that the message gets through, not only faster, but with better integrity.

Now there may be cases where perfect communication is needed, and times when not. So the tolerance for loss of data may depend on the content of the message. If the message is, say, ‘nice day, huh?’, then a UDP-style protocol will do. The general message gets through even if a little of it is lost, and even if the entire message is missed, it’s not a game-changer. If the message is ‘I’m your doctor, and you have a specific disease and need to take a specific drug to get healthy’, then, yeah, TCP is called for. If any of that is misunderstood, it would be bad. Finding the balance between which messages can tolerate loss and which cannot is the real black art, and perhaps the subject of a future post.

So when you are talking with anyone, consider your protocol. Especially when talking to people who are very important to you. Are your messages getting through? Do your lines of communication need some repair? Ask. Getting these lines in better shape will allow for more efficient communication with less need for error-correction.

My First X3Dom Shader – Navigating the minefield

As WebGL platforms go, X3Dom is great: the latest incarnation of VRML and X3D, based on an ISO standard, durable, community supported, and championed by Fraunhofer. Awesome. It still largely has that 90’s ‘VRML’ look, though, because of the standard lighting model. Fraunhofer has made great progress in putting forth the ‘Common Surface Shader’, and allows for any type of shading with the ‘Composed Shader’.

I’m making an X3D/X3Dom exporter for Unity, and that’s ‘shader heavy’. I figured I’d support the most common stuff, and the ‘Common Surface Shader’ handles a lot of that. For the rest, shaders need to be translated, and that’s where the ‘Composed Shader’ comes in. So I thought I’d start with a Terrain shader, because a) it’s a simple shader, just 4 textures plus 1 more to blend them, and b) terrain is a big, flashy, fun visual, so a little effort (in theory) for a lot of splash. Or, to start, maybe just 2 textures and a fixed blend weight. Easy, right?

So of course I first turn to the documentation. God bless everyone who has put up any sort of documentation and examples and tutorials! Don’t get me wrong, these are good things. Except, in my case, I just needed a basic, simple texture-blend shader to get started, not a fun ‘gooch’ shader (which has no textures). The teapot example is pretty good, and has more than I needed, but this is where I found the most gold.

I’ll cut to the chase. Here’s the simplest thing I could get working. The vertex shader does almost nothing (as it shouldn’t) and the fragment shader just draws one texture blended with another, nothing fancy. It may not be the best implementation, so please let me know! I wanted to keep it simple, with a couple of textures:

<appearance>
 <Material diffuseColor=".7 .7 .7" specularColor=".5 .5 .5" ></Material>
 <MultiTexture>
   <ImageTexture url='"green.png"' ></ImageTexture>
   <ImageTexture url='"rocks.png"' ></ImageTexture>
 </MultiTexture>
 <MultiTextureTransform>
   <TextureTransform scale='.1 .1' translation='.25 .33'></TextureTransform>
   <TextureTransform scale='10 10' translation='.25 .33'></TextureTransform>
 </MultiTextureTransform>
 <ComposedShader>
   <field id='Picture0' name='Picture0' type='SFInt32' value='0' accessType='inputOutput'></field>
   <field id='Picture1' name='Picture1' type='SFInt32' value='1' accessType='inputOutput'></field>
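    <!-- Note: each field name must exactly match a uniform name in the shader code below; the SFInt32 values (0 and 1) select which MultiTexture child feeds each sampler -->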
<ShaderPart type='VERTEX'>
 attribute vec3 position; // DO I EVEN NEED A VERTEX SHADER? I DON'T WANT TO ALTER VERTICES
 attribute vec3 normal;
 attribute vec2 texcoord;
 uniform mat4 modelViewMatrix;
 uniform mat4 projectionMatrix;
 uniform vec4 view_position;
 varying vec2 fragTexCoord;
 void main(void){
  vec4 Pos = vec4(position.x, position.y, -position.z, 1.0); // USING THIS CUZ IT WORKS, NOT SURE IF REALLY NEEDED
  fragTexCoord = vec2(texcoord.x, 1.0 - texcoord.y);
  vec4 mvPosition = modelViewMatrix * Pos;
  gl_Position = projectionMatrix * mvPosition;
 }
 </ShaderPart>
 <ShaderPart type='FRAGMENT'>
#ifdef GL_ES
  precision highp float;
#endif
  uniform sampler2D Picture0;
  uniform sampler2D Picture1;
  varying vec2 fragTexCoord;
  void main(void){
   vec4 texCol0 = texture2D(Picture0, fragTexCoord);
   vec4 texCol1 = texture2D(Picture1, fragTexCoord);
   gl_FragColor = texCol0 * .5 + texCol1 * .5; // Half this, half that
  }
  </ShaderPart>
 </ComposedShader>
</appearance>

So why was that so hard?

1. x3dom.js is not tolerant of fools, but doesn’t say why. For example, I was getting this error:

Error: WebGL: A texture is going to be rendered as if it were black, as per the OpenGL ES 2.0.24 spec section 3.8.2, because it is a 2D texture, with a minification filter requiring a mipmap, and is not mipmap complete (as defined in section 3.7.10). x3dom-full.debug.js:3303

And no explanation of why. I checked the texture files; they are power-of-two, well behaved, and work in other examples, so why not mine?

Another example: output from Vivaty Studio has urls that are technically formatted correctly, but in a way that makes x3dom choke.

2. The documentation is incomplete. I know it’s not the top priority and it’s being done for the public good, not so much for profit, but hey. I have no idea what is, and is not, needed from the examples. I don’t need to futz with vertices, so do I even need that part? If so, why? And what should it look like? What are attributes, uniforms, and varyings, what exactly do they mean, which part(s) are they valid in, and do they have to follow a certain name or naming convention?

3. X3Dom is buggy. The ShaderPart url attribute appears not to work.

So these are landmines. Here’s how to avoid them:

1. Make sure you are either using HTML conventions or XHTML conventions, not mixing them.

It would be nice if the x3dom parser would either tolerate the mix, or warn you about it. But it’s easy to get into trouble when you copy/paste from other sources.

The first problem was caused by this:

<ImageTexture url='"example.png"'/>

That seems pretty correct, no? No! Because the rest of the page uses explicit end tags. Changing it to

<ImageTexture url='"example.png"'></ImageTexture>

fixed the problem. There was nothing wrong with the texture, despite what the error message says; the problem was the X3DOM syntax.

Next, Vivaty Studio indents urls like this:

<ImageTexture url='
"example.png"
"example2.png"
'/>

Now, that white space is legal. But a one-line fix in x3dom.js would handle it: just trim the url string. Otherwise the whitespace gets included in the url, leading to a bad url and no texture load. And note the lack of an end tag, so watch out for both issues here.
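Until a fix like that lands in x3dom.js, the safest workaround is to clean up the exported markup yourself: collapse the url onto one line and give the node an explicit end tag (same example file names as above):

<ImageTexture url='"example.png" "example2.png"'></ImageTexture>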

2. Make sure the names in your fields exactly match the names in your shader parts. And it seems there must be a vertex shader, so if you declare a ‘varying’ there, make sure it is also declared in your fragment shader.
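To make that concrete, these are the three places that have to agree, pulled from my example above (the names themselves are arbitrary, they just have to match):

<field name='Picture0' type='SFInt32' value='0' accessType='inputOutput'></field>  <!-- the X3D field -->
uniform sampler2D Picture0;   // same name, declared as a uniform in the fragment ShaderPart
varying vec2 fragTexCoord;    // any varying must appear in BOTH the vertex and fragment ShaderParts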

3. I had no luck using ‘attribute vec2 texcoord;’ in the fragment shader (attributes are only legal in the vertex shader in GLSL), so I declare it in the vertex shader and pass it along as ‘varying vec2 fragTexCoord;’, which then has to be declared the same way in the fragment shader. There may be a cleaner way to do this; please correct me if I’m wrong.

4. MultiTextureTransform is not yet implemented in X3Dom, so don’t rely on it. You can pass the scale and translation into your shader directly for the time being, as sketched below.
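Here’s a rough sketch of what I mean, reusing the ComposedShader from above. The field names ‘tex1Scale’ and ‘tex1Offset’ are just names I made up for illustration; the idea is simply to hand the transform values to the shader as uniforms and apply them to the texture coordinate yourself:

<ComposedShader>
  <field name='Picture0' type='SFInt32' value='0' accessType='inputOutput'></field>
  <field name='Picture1' type='SFInt32' value='1' accessType='inputOutput'></field>
  <field name='tex1Scale' type='SFVec2f' value='10 10' accessType='inputOutput'></field>
  <field name='tex1Offset' type='SFVec2f' value='.25 .33' accessType='inputOutput'></field>
  <!-- VERTEX ShaderPart unchanged from the example above -->
  <ShaderPart type='FRAGMENT'>
#ifdef GL_ES
  precision highp float;
#endif
  uniform sampler2D Picture0;
  uniform sampler2D Picture1;
  uniform vec2 tex1Scale;
  uniform vec2 tex1Offset;
  varying vec2 fragTexCoord;
  void main(void){
   vec4 texCol0 = texture2D(Picture0, fragTexCoord);
   vec4 texCol1 = texture2D(Picture1, fragTexCoord * tex1Scale + tex1Offset); // the 'TextureTransform' applied by hand
   gl_FragColor = texCol0 * .5 + texCol1 * .5;
  }
  </ShaderPart>
</ComposedShader>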

5. Unfortunately, the ShaderPart url attribute appears to be broken. It will read the external file but doesn’t use it properly. So you will have to ‘inline’ your shader code. Not ‘Inline’ like the node; ‘inline’ as in paste the code into the file.

Problem with X3D/X3Dom spec

Being based on XML, X3D/X3Dom should not depend on the order in which elements are listed within a container. But MultiTexture (and the associated coordinates and transforms) DOES depend on order. This is bad. Fraunhofer seems to have begun to address this with the ‘value’ field on the Fields, but that is incomplete.

I suggest that a new attribute, ‘index’, be allowed on ImageTexture and the associated coordinate and transform nodes, to disambiguate this.

<MultiTexture>
 <ImageTexture index='1' url='"funnyquotes.png"'/>
 <ImageTexture index='0' url='"lovethat.png"'/>
</MultiTexture>

<MultiTextureTransform>
 <TextureTransform index='0' scale='1 2'/>
 <TextureTransform index='1' scale='.5 .2'/>
</MultiTextureTransform>

See where I’m going with that? Then the parser can match these items up by index, and they can be passed to the shader that way as well.

VR for the Blind?

With all the hoopla over the Oculus Rift and the many other HMD VR displays, it’s easy to forget the other entire half of the VR experience: audio. Oculus seeks to whip this notion into VR devs with the Crescent Bay, which has built-in headphones. But hey kids, let’s not wait, mmmkay? I’ve always wanted to make VR experiences so pleasant that you want to close your eyes and just listen. Any VR content makers working on relaxing scenes, I’m talking to you!

So here’s my first attempt at an all-audio experience, HMD optional: The Aviary. Just put on headphones and listen. Now, it DOES help if you are wearing the Oculus (DK2 for the moment), because then you get positional and rotational cues. And AND! Guess what? You interact with the scene’s simple UI by nodding your head ‘yes’ to questions. That’s it. Made with Unity and 3DCeption by Two Big Ears.

This is pretty raw, I just got it working at all, and there’s lots of room for improvement. The questions can/should be audio too. Will be. But give it a try. Mobile coming soon, my Cardboard and Durovis minions!

Please sign up for the VR Audio meetup at http://meetup.com/VR-Audio

Get ‘The Aviary’ HERE

Fun and victory at VR Hackathon

So the first SF VR Hackathon was held this past weekend, and it was historic! Tons of folks, many teams, so much innovation! Check the website for details, but from my perspective: YAY! My team won 1st Place in the Medical category for ‘Starwalker’, a platform for physical and mental exercise. Using Unity, Oculus, and Leap, the person is tasked with reaching for targets in the space around them. These targets may be moving or not, but in either case their placement and behavior are set by the therapist/doctor according to the range of motion, duration, repetition, etc. needed for that session. And the targets may be ordered into puzzles or games for mental exercise. Success and failure, time of completion, etc. can be tracked. High-fives to Chris Peri, Hunter Whitney, David Yue, Logan and Jeff Rosenburg for their long hours helping put this together, Peter Simpson for great ideas, and Mike Aratow for inspiration.

Our demo scenario for the Hackathon was a ‘space walk’ where the user is in orbit around Earth at the International Space Station and needs to access some parts to construct some Thing. To access the parts, a simple game (think Simon) is presented, and the user must push buttons on the doors to get them to open. We got that much working. Chris Peri had a second phase just about ready, but we didn’t get it integrated in time: it tasks the user with grabbing parts and assembling a truss. The idea being that not only can they assemble this truss, they can also use it to pull themselves around in the scene (novel navigation). Anyway, we were all pretty surprised to win, as 3rd and 2nd place were also pretty awesome!

Probably the most amazing thing to me was a shader by Chris Berke: two triangles in the scene, and yet it was a living world with a day/night cycle and thousands of beings scurrying around on gorgeous terrain. Amazing!

I was too burnt out to sample everyone’s demos, and I missed the Ghost Busters game that was Best of Show, but I’m looking forward to seeing that on Oculus Share!

Props to Leap and the rest of the sponsors! Praise especially unto Mike Aratow, Damon Hernandez, and the others who helped organize this amazing event!

Oculus Rifts On A Plane

The Challenge: Can a crowded airplane be made tolerable using VR?

So I loaded up my backpack and headed out. I assumed that airport security would give me crap about this strange stuff, but I guess the ones at SFO are either savvy enough or just don’t care. Probably the former. No problem.

Lesson 1: Don’t sweat it.

Once on the plane, one must wait for takeoff and the ‘ok to move about’ announcement before whipping out the equipment.

Lesson 2: Use a completely portable HMD, such as Durovis Dive, Google Cardboard, or Gameface.

Ok, so what shall it be? I’m thinking the flight is pretty stable, so I’ll go with a stable scene. Tuscany, why not?

[Photos: trying the Rift in my seat, and the Tuscany demo]

Lesson 3: Yeah, this works!

Ok, so now there’s some turbulence. Maybe an earthquake scene. Or a roller coaster.

Lesson 4: NO ROLLER COASTERS!

[Image: Samuel L. Jackson]

“I’m sick of the motherf*n’ roller coasters on this motherf*n’ plane”

Of course, to get some record of this, it’s either selfies or asking the people near you. Who, of course, are looking at you like they should probably be pushing the ‘call’ button anyway.

Lesson 5: Be nice, share.

So the two guys from Italy, who were playing some Kindle-based game, were very interested. I showed them a few demos, they loved it.

Yes Virginia, there is a market for VR on a plane. I’m thinking especially of those who fear flying and/or tight spaces.

Lesson 6: Don’t try to eat with VR on.

But you already knew this.

A simple boy’s dream

Sometimes people ask me why I got into VR, or 3D, or even computers for that matter.

It really goes back to my childhood.

I wanted to create things. I wanted to fly. I wanted to know more. I wanted to visit everywhere – in all of Earth and outer space, and at all scales from quantum to intergalactic. Somehow I knew, from the start, that computers – especially graphics and audio (and later networking) – would avail me these possibilities.

A simple boy with a simple dream: to have the power of God. Is that so much to ask? Well, low-cost, high-power tech has helped me attain these powers.

I won’t bore you with the details and the history. Suffice it to say, we now have the computing power-to-size ratio for practical VR. We have software that will let you create anything (omnipotence), the Internet to let you know everything (omniscience), and the resources to put it all into a Metaverse and use telepresence (omnipresence). We can (re)create anything we can imagine, and/or anything that was or might be at any time (modulo the accuracy of actual time travel, stay tuned!).

Any universe I can imagine is at my fingertips now. Interacting with the machine, it comes to life.

I never really thought about it being for the wealthy or gifted. I’m neither. I just wanted it, that’s all. And with tech today, anyone can have it. Empowering? It is the very essence thereof.

Welcome to Heaven, one and all.

The (near) future of VR

So I was at the SFVR Meetup Thursday, always an amazing group. There was a demo by James Blaha of ‘Diplopia’, which can help people with ‘lazy eye’, and one of Leap Motion’s new ‘skeletal tracking’ SDK, which is cool. But the star of the show, for me, was again GameFace Labs, with their latest Android-based wireless HMD with 1440 lines of resolution. Note that’s a third more lines than the current Oculus Rift DK2’s 1080 (coming in July), which is also not wireless. I’m blown away by GameFace Labs and looking very much forward to working with that hardware!

[Image: the GameFace Labs HMD]

A work in progress

[Image: ‘WhatDreams’ scene, a work in progress]

Facebook Presence

FWIW I now have a Facebook page for all this too, in case that’s more convenient for you: https://www.facebook.com/thatvrguy

An Oculus-ready Unity Web Player Scene Loader

I’ve been working on Euman’s Playing Mondo game platform for a while. In a nutshell, you can design your own game, which can be augmented-reality enabled for playing in the real world too. And it can load arbitrary scenes on the fly, which sync to maps and other players, all very real-time collaborative cool.

Enter the Oculus Rift. Now I know, the Unity web player isn’t supposed to work with DLLs, and Oculus runs from a DLL, and so on. Well, I tricked it. So this is still a work in progress, but it’s coming right along. This snap is from the test staging server, not using actual public-facing content (which is much nicer!). But you get the idea. In practice it would of course go full-screen for maximum VR goodness.

[Screenshot: the Oculus-enabled Unity Web Player scene loader]

#vr #euman #playingmondo #game #oculusvr #oculusrift #unity #x3d