
Posts from July 2016

Stories from Google Code-in: FOSSASIA and Haiku

Friday, July 29, 2016

Google Code-in is our annual contest to help pre-university students gain real-world computer science experience by taking on tasks of varying difficulty levels with the help of volunteer mentors. The tasks are created by open source projects, so while students are learning they are also contributing to software many of us use on a daily basis.

The finalists and winners for our 2015/2016 season were announced in February and, in June, the grand prize winners joined us for four days of learning and celebration. Students and their guardians came from all around the world. One of my favorite things, as one of the Googler hosts, was seeing the light bulbs go on above parents’ heads as they came to understand open source and why it’s so important. These parents and guardians grew even more proud as they learned just how much their teenagers had contributed to the world through participating in Google Code-in.

We’ve invited contest winners and organizations to write about their experience and will be sharing their stories in a series of blog posts. This marks the first post in the series.

Google Code-in 2015 Grand Prize Winners and Mentors

Let’s start with Jason Wong, a student from the US who worked with FOSSASIA. FOSSASIA supports open source developers in Asia through events and coding programs.

Jason got into computer science during middle school at a summer camp where he built a website describing the differences between Linux, OS X, and Windows.  He dove deeper into web development by learning PHP and JavaScript through YouTube videos. He enjoyed being able to build more complex and dynamic websites. Like many new developers, Jason became very confident but did not concern himself with important aspects of programming like testing.

He learned about Google Code-in when Stephanie Taylor, a fellow open source program manager who runs the GCI program here at Google, gave a talk at his school. Jason dove right in, picking FOSSASIA as the project he would contribute to.

FOSSASIA offered Jason a chance to learn a lot about development and open source. He worked on their event pages, integrated Loklak and added an RSS section to their website, gaining experience with version control, Docker, Pharo and Node.js in the process. Most importantly, Jason learned about collaboration. He had this to say:

“Collaboration is so important in the open source community as it allows everyone to come together to help the world. Google Code-in has persuaded me to contribute to open source in the future.”

Next up we have Hannah Pan, another US student. She chose to work on Haiku, an open source operating system built for personal computers, because it uses C/C++, which she was already confident with.

Hannah got into computer science through a high school AP course and discovered Google Code-in through this blog (woohoo!). She decided to participate even though it had already been underway for two weeks. Aiming just to make the top 10 in order to have a chance at being a finalist (and earn a hoodie), Hannah finished as a grand prize winner! 

The learning curve was steep: *nix commands, build tools and GitHub all presented new challenges. She was surprised by how much code she sometimes had to sift through just to isolate the cause of a minor bug.

Like all of the participants, Hannah found her mentors to be crucial in providing both technical guidance and moral support. She explained, “I was amazed at my mentors’ expertise, dedication, modesty, and high standards. They taught me to strive for excellence rather than settle for mediocrity.”

Among other things, Hannah added localization support to the Tipster app, fixed extractDebugInfo, and even wrote a how-to article relating to the work. Reflecting on her experience, Hannah wrote:

“On the technical side, not only have I learned a lot, but I have realized how much more I have yet to learn. In addition, it has taught me some important life skills that no doubt will benefit me in my future endeavors. I’d like to thank my mentors and other students who inspired me and pushed me to do my best.”

Thank you to Jason and Hannah both for contributing to open source and sharing their Google Code-in experiences with us. Stay tuned as we continue this series in our next blog post!

By Josh Simmons, Open Source Programs Office

Omnitone: Spatial audio on the web

Monday, July 25, 2016


Spatial audio is a key element for an immersive virtual reality (VR) experience. By bringing spatial audio to the web, the browser can be transformed into a complete VR media player with incredible reach and engagement. That’s why the Chrome WebAudio team has created and is releasing the Omnitone project, an open source spatial audio renderer with cross-browser support.

Our challenge was to introduce the audio spatialization technique called ambisonics so that users can hear full-sphere surround sound in the browser. In order to achieve this, we implemented ambisonic decoding with binaural rendering using web technology. There are several paths for introducing a new feature into the web platform, but we chose to use only the Web Audio API. In doing so, we can reach a larger audience with this cross-browser technology, and we can also avoid the lengthy standardization process for introducing a new Web Audio component. This is possible because the Web Audio API provides all the necessary building blocks for this audio spatialization technique.



Omnitone Audio Processing Diagram

The AmbiX format recording, which is the input the Omnitone decoder targets, contains 4 channels of audio encoded using ambisonics, which can then be decoded into an arbitrary speaker setup. Instead of an actual speaker array, Omnitone uses 8 virtual speakers based on head-related transfer function (HRTF) convolution to render the final audio stream binaurally. This binaurally-rendered audio can convey a sense of space when heard through headphones.
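To make the pipeline concrete, here is a minimal sketch of the same idea assembled purely from standard Web Audio nodes: split the 4-channel AmbiX stream, mix it down for a virtual speaker using placeholder decode gains, and convolve the result with per-ear HRTF impulse responses. This is not Omnitone's actual implementation; the decode weights and HRTF buffers are stand-ins you would need to supply.

```typescript
// Minimal sketch of first-order ambisonic (AmbiX: W, Y, Z, X) to binaural
// rendering with standard Web Audio nodes. The decode weights and HRTF
// buffers are placeholders; Omnitone's real renderer uses 8 virtual
// speakers and measured HRTF sets.
const ctx = new AudioContext();
const element = document.querySelector('audio')!;      // 4-channel AmbiX file
const source = ctx.createMediaElementSource(element);
const splitter = ctx.createChannelSplitter(4);          // W, Y, Z, X
const binaural = ctx.createChannelMerger(2);            // left ear, right ear

// One virtual speaker: a weighted mix of the ambisonic channels (one row of
// the decode matrix), convolved with that speaker's HRTF for each ear.
function addVirtualSpeaker(decodeRow: number[],
                           hrtfLeft: AudioBuffer, hrtfRight: AudioBuffer) {
  const mix = ctx.createGain();
  decodeRow.forEach((weight, channel) => {
    const g = ctx.createGain();
    g.gain.value = weight;
    splitter.connect(g, channel);   // tap a single ambisonic channel
    g.connect(mix);                 // inputs to a node are summed
  });
  const convolveLeft = ctx.createConvolver();
  const convolveRight = ctx.createConvolver();
  convolveLeft.buffer = hrtfLeft;
  convolveRight.buffer = hrtfRight;
  mix.connect(convolveLeft);
  mix.connect(convolveRight);
  convolveLeft.connect(binaural, 0, 0);    // into the left output channel
  convolveRight.connect(binaural, 0, 1);   // into the right output channel
}

source.connect(splitter);
binaural.connect(ctx.destination);
// A full renderer calls addVirtualSpeaker() once per virtual speaker position.
```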

The beauty of this mechanism lies in the sound-field rotation applied to the incoming spatial audio stream. The orientation sensor of a VR headset or a smartphone can be linked to Omnitone’s decoder to seamlessly rotate the entire sound field. The rest of the spatialization process will be handled automatically by Omnitone. A live demo can be found at the project landing page.
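For a feel of what that rotation does, here is a sketch of the yaw-only case: the X and Y ambisonic channels are remixed through four gain nodes whose values come from the device's orientation sensor. Omnitone applies a full 3-D rotation internally; the channel indices and sign conventions below are illustrative assumptions, not its API.

```typescript
// Sketch: rotating a first-order sound field about the vertical axis by
// remixing the X and Y channels (AmbiX order: W, Y, Z, X). The sign
// convention depends on the coordinate frame, so treat it as a placeholder.
const ctx = new AudioContext();
const splitter = ctx.createChannelSplitter(4);   // fed by a 4-channel source
const rotated = ctx.createChannelMerger(4);      // rotated ambisonic stream

const yToY = ctx.createGain(), xToY = ctx.createGain();
const yToX = ctx.createGain(), xToX = ctx.createGain();

splitter.connect(rotated, 0, 0);                 // W passes through
splitter.connect(rotated, 2, 2);                 // Z passes through
splitter.connect(yToY, 1); splitter.connect(xToY, 3);
splitter.connect(yToX, 1); splitter.connect(xToX, 3);
yToY.connect(rotated, 0, 1); xToY.connect(rotated, 0, 1);   // Y' input
yToX.connect(rotated, 0, 3); xToX.connect(rotated, 0, 3);   // X' input

function setYaw(radians: number) {
  const c = Math.cos(radians), s = Math.sin(radians);
  yToY.gain.value = c;   xToY.gain.value = s;    // Y' =  c*Y + s*X
  yToX.gain.value = -s;  xToX.gain.value = c;    // X' = -s*Y + c*X
}

// Drive the sound-field rotation from the device's orientation sensor.
window.addEventListener('deviceorientation', (event) => {
  if (event.alpha !== null) setYaw(event.alpha * Math.PI / 180);
});
// The `rotated` stream would then feed a binaural stage like the one above.
```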

Throughout the project, we worked closely with the Google VR team for their VR audio expertise. Not only was their knowledge of spatial audio a tremendous help for the project, but the collaboration also ensured identical audio spatialization across all of Google’s VR applications, both on the web and on Android (e.g. Google VR SDK, YouTube Android app). The Spatial Media Specification and HRTF sets are great examples of the Google VR team’s efforts, and Omnitone is built on top of that specification and those HRTF sets.

With emerging web-based VR projects like WebVR, Omnitone’s audio spatialization can play a critical role in a more immersive VR experience on the web. Web-based VR applications will also benefit from high-quality streaming spatial audio, as the Chrome Media team has recently added first-order ambisonics (FOA) compression to the open source audio codec Opus. More exciting things like VR view integration, higher-order ambisonics and mobile web support will also be coming soon to Omnitone.

We look forward to seeing what people do with Omnitone now that it's open source. Feel free to reach out to us or leave a comment with your thoughts and feedback on the issue tracker on GitHub.

By Hongchan Choi and Raymond Toy, Chrome Team

Due to the incomplete implementation of multichannel audio decoding on various browsers, Omnitone does not support mobile web at the time of writing.

Kubernetes 1.3 is here!

Thursday, July 21, 2016

With all of the excitement being generated around the Kubernetes 1.3 release and the first anniversary of Kubernetes 1.0 (#k8sbday), now is a great time to point out some of the features that enterprise users should be taking note of.

If you’re not familiar with Kubernetes, let me get you up to speed.

Kubernetes is an open-source container automation framework that builds upon 15 years of experience running production workloads at Google. Once you declare a desired state, Kubernetes works to drive your system toward that state. As a developer, this means less time handling trivial tasks that a computer can automate and more time focusing on developing applications that provide value to users.

Additionally, Kubernetes aims to be a framework that you can operate at planetary scale, run anywhere, and never outgrow.

With the release of Kubernetes 1.3, Kubernetes is closer than ever to meeting those goals; the 1.3 release adds a number of exciting new features.

Aside from features, the coolest part about working with Kubernetes is hearing user stories. I’ll soon be publishing an interview with Joseph Jacks, co-founder of Kismatic, the enterprise Kubernetes company, on the Kubernetes blog.

Joseph is very active in the Kubernetes community and has extensive experience with Kubernetes in production. In the interview I ask him why he bet his business on Kubernetes, what could be better, and how he sees Kubernetes growing in the near future.

Kubernetes has many, many features to offer that I didn’t get to cover in this short write-up. If you know anyone who needs to ramp up on Kubernetes, the easiest way is the free course I created with Kelsey Hightower, Scalable Microservices with Kubernetes. The course covers the basic features of Kubernetes. If you want an overview of what’s new in Kubernetes 1.3, feel free to look at the “What’s new in Kubernetes 1.3” video or slides.

Finally, for a more in-depth look at the 1.3 release, make sure to check out the 5 Days of Kubernetes 1.3 blog series.

Want to learn more about container orchestration and cloud native platforms? There’s plenty of recommended reading to follow up with.
By Carter Morgan, Developer Programs Engineer

Announcing an Open Source ADC board for BeagleBone

Wednesday, July 20, 2016

Cross posted on the Google Research Blog
Working with electronics, we often find ourselves soldering up a half-baked electronic circuit to detect some sort of signal. For example, last year we wanted to measure the strength of a carrier. We started with traditional analog circuits — amplifier, filter, envelope detector, threshold. You can see some of our prototypes in the image below; they get pretty messy.


While there's a certain satisfaction in taming a signal using the physical properties of capacitors, coils of wire and transistors, it's usually easier to digitize the signal with an Analog to Digital Converter (ADC) and process it with Digital Signal Processing (DSP) instead of electronic parts. Tweaking software doesn't require a soldering iron, and lets us modify signals in ways that would be impractical or impossible with analog circuits.
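As a toy example of that trade-off, here is the classic analog chain (rectifier, RC low-pass, comparator) redone as a few lines of arithmetic on samples that have already been digitized. The sample rate, cutoff and threshold are made-up values; retuning them is an edit rather than a rework of the board.

```typescript
// Software version of an analog envelope detector with a threshold:
// full-wave rectifier -> one-pole low-pass -> comparator. All constants
// here are illustrative, not taken from the PRUDAQ project.
function detectCarrier(samples: Float32Array, sampleRateHz: number): boolean[] {
  const cutoffHz = 100;                          // envelope smoothing bandwidth
  const alpha = 1 - Math.exp(-2 * Math.PI * cutoffHz / sampleRateHz);
  const threshold = 0.1;                         // "carrier present" level
  let envelope = 0;
  const present: boolean[] = [];
  for (const sample of samples) {
    const rectified = Math.abs(sample);          // rectifier
    envelope += alpha * (rectified - envelope);  // RC-style low-pass filter
    present.push(envelope > threshold);          // comparator / threshold
  }
  return present;
}
```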


There are several standard solutions for digitizing a signal: connect a laptop to an oscilloscope or Data Acquisition System (DAQ) via USB or Ethernet, or use the onboard ADCs of a maker board like an Arduino. The former are sensitive and accurate, but also big and power-hungry. The latter are cheap and tiny, but slower, and have only enough RAM for milliseconds' worth of high-speed sample data.
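That RAM limit is easy to put a number on. The figures below are illustrative assumptions for a typical 8-bit maker board, not the specs of any particular product.

```typescript
// Back-of-envelope: how long can a small board buffer high-speed samples?
const sramBytes = 2 * 1024;       // assumed SRAM on an 8-bit maker board
const bytesPerSample = 2;         // a 10-bit reading stored in a 16-bit word
const sampleRateHz = 100_000;     // an assumed "high speed" acquisition rate

const samplesThatFit = sramBytes / bytesPerSample;        // 1024 samples
const bufferMs = (samplesThatFit / sampleRateHz) * 1000;  // ~10 ms of data
console.log(`${samplesThatFit} samples ≈ ${bufferMs.toFixed(1)} ms buffered`);
```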


That led us to investigate single board computers like the BeagleBone and Raspberry Pi, which are small and cheap like an Arduino, but have specs like a smartphone. And crucially, the BeagleBone's system-on-a-chip (SoC) combines a beefy ARMv7 CPU with two smaller Programmable Realtime Units (PRUs) that have access to all 512MB of system RAM. This lets us dedicate the PRUs to the time-sensitive and repetitive task of reading each sample out of an external ADC, while the main CPU is free to work with the data using the GNU/Linux tools we're used to.


The result is an open source BeagleBone cape we've named PRUDAQ.  It's built around the Analog Devices AD9201 ADC, which samples two inputs simultaneously at up to 20 megasamples per second, per channel.  Simultaneous sampling and high sample rates make it useful for software-defined radio (SDR) and scientific applications where a built-in ADC isn't quite up to the task.  


Our open source electrical design and sample code are available on GitHub, and GroupGets has boards ready to ship for $79. We were also fortunate to have help from Google intern Kumar Abhishek, who added support for PRUDAQ to BeagleLogic, his Google Summer of Code project, which performs much better than our sample code.


We started PRUDAQ for our own needs, but quickly realized that others might also find it useful. We're excited to get your feedback through the email list.  Tell us what can be done with inexpensive fast ADCs paired with inexpensive fast CPUs!

Posted by Jason Holt, Software Engineer

Lessons from Professors' Open Source Software Experience (POSSE) 2016

Wednesday, July 6, 2016


From Google Summer of Code to Google Code-in, the Open Source Programs Office does a lot to get students involved with open source. In order to learn more about supporting open source in academia, I attended the NSF-funded Professors' Open Source Software Experience (POSSE) in Philadelphia. It was a great opportunity for us to better understand the challenges instructors face in weaving open source into their curriculum and to hear ideas for bridging the gap.

Almost 30 university professors and community college lecturers attended the 3-day workshop. During the workshop, attendees worked in small groups, getting hands-on experience incorporating humanitarian free and open source software (HFOSS) into their teaching. Professors were able to talk, mingle and share best practices throughout the event.

The POSSE workshop is led by Heidi Ellis, Professor, Department of Computer Science and Information Technology at Western New England University, and Greg Hislop, Professor of Software Engineering and Senior Associate Dean for Academic Affairs at Drexel University. Heidi and Greg took over running POSSE five years after Red Hat began the program as an outreach effort to the higher education community. Red Hat continues as a collaborator in the effort. Around 40 university and community college professors participate in the program every year with over 100 individuals attending the workshop in the last four years.

Here are some of the challenges professors shared:
  • Very little guidance on how to bring FOSS into the classroom. No standard curriculum / syllabus available to reference. 
  • Time investment required to change the curriculum.
  • Will not be rewarded for teaching FOSS courses.
  • Will not get funds to travel for workshops/conferences unless it’s to present a paper at a conference.
  • Many administrations aren’t aware that incorporating open source benefits students, even though more and more companies use open source and expect their new hires to be familiar with it.

The next POSSE will be held Nov 17-19. Faculty who are interested in attending POSSE are encouraged to apply.

We also discussed a number of open source programs that are currently working to engage students with open source software development.

Thanks to Heidi, Greg and the FOSS2Serve team for organizing POSSE 2016! We look forward to taking what we’ve learned and using it to better support FOSS education in academia.

By Feiran Helen Hu, Open Source Programs Office
