I lost track of that space a year or so ago ... the last thing I remember was John Carmack's involvement, and that Andreessen Horowitz and @cdixon were diving in headfirst. I never saw who won out between the Rift and the Leap guys, but I'm assuming from your reply it's Oculus
My homebrewing adventure is a tech sabbatical of sorts, you could say. Very recently though, I took five or six months off from brewing, and have been picking up Elixir and catching up on Docker. And I'm literally f**king with AWS Lambda right now. Hoping I can balance all the things: lots of work, brewing, coding, and Danielle and the dogs. Oh yeah, all while getting healthier despite all of those hours at a computer =( ....... sure!
I thought OpenStack was a clusterf**k of governance hell these days, but like I said, sabbatical. I've done a good job of losing track of the edge. I became obsessed with staying current and wasted all of my limited time doing so =|
It's been good, too, because I realized I didn't need all of that for solopreneur projects. Now I'm focusing on tooling for that kind of thing only, so it's more about static sites, serverless, WebRTC, WebGL, JS e-commerce, and some IoT just to have something tangible ...
It used to be, but many major companies (and customers of mine) are moving forward. We have several customers with 1200-compute-node deployments of OpenStack. We fix the last part that doesn't scale, plus give netadmins the tools to troubleshoot virtual networks. OpenStack is accelerating this year, especially for running things like Cloud Foundry or Kubernetes on top of it.
We also do some hybrid VPC peering to AWS on the networking side, so customers can have workloads in AWS, locally in OST, or in a remote Azure cloud, but the tenant network spans all 3 sites. And lastly we do VTEP termination to ToR switches, which we can control via OVSDB, so customers can bridge in physical heavy iron (like video demuxing boxes) into a virtual environment so they're all on the same L2 broadcast domain (the hardware doesn't know that the virtual is on a different rack) and vice ve...
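To give you the flavor: with the stock OVS VTEP schema, binding a physical switch port into a tenant logical switch looks roughly like this (sketch only, made-up names; in practice the logical-switch side is usually driven by the SDN controller over OVSDB rather than typed by hand):

```
# register the physical switch and its VXLAN tunnel endpoint IP
vtep-ctl add-ps ps0
vtep-ctl set Physical_Switch ps0 tunnel_ips=10.2.2.1

# expose the port the heavy iron is cabled into
vtep-ctl add-port ps0 eth3

# bind that port (VLAN 0 = untagged) to tenant logical switch ls0,
# putting the box on the same L2 segment as the VMs
vtep-ctl bind-ls ps0 eth3 0 ls0
```

From the hardware's point of view it's just a VLAN on a port; the VXLAN encapsulation to whatever rack the VMs live on is the switch's problem.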
Sorry Grant, OST = OpenStack, OVSDB = Open vSwitch Database. OVSDB runs on new top-of-rack switches, anything using the new Trident II or Tomahawk (40G and 100G respectively) chipsets. We also work closely with all the Linux switch OS vendors like Cumulus to do a lot of cool stuff, changing private datacenters and driving costs significantly down. We are also doing work for the Open Compute Project (think Facebook).
So OpenFlow-type stuff ... I'm only the slightest bit familiar with networking at that scale NOW. When I followed that space more closely, it was called grid computing - so I'm more familiar with the likes of Gnutella et al. I guess the newfangled SDN stuff is improved for not being tree-like, right? I know they tout it as being superior for mitigating a lot of the modern attack vectors (DDoS etc).
OpenFlow is garbage. We fix all the issues with OpenFlow and abstract the data plane from the control plane. And yep, that's part of it. It's also easier to make redundant networks, HA, along with a LOT of other neat stuff that helps people scale. Plus, as the virt nets are separate from the physical network, you can use simpler CLOS-type networks in a data center, so there's less complexity and fewer attack vectors for the physical network. We do full distributed load balance, metadata, dhc...
I know OF that stuff, but have no practical experience setting it up - or a need for it for anything I've worked on. The degree of datacenter redundancy that my use case warrants is a checkbox on Linode (or just using AWS). Nothing mission critical in my plans, purposefully. I've stripped down my future plans to only include solopreneurial-type apps that I can build AND ADMIN myself. KISS.