Posts from March 2018

My first open source project and Google Code-in

Wednesday, March 28, 2018

This is a guest post from a mentor with coala, an open source tool for linting and fixing code in many different languages, which participated in Google Code-in 2017.

About two years ago, my friend Gyan and I built a small web app which checked whether or not a given username was available on a few popular social media websites. The idea was simple: judge availability of the username on the basis of an HTTP response. Here’s a simplified Python version:
import requests

def form_website_url(website, username):
  # Eg: form_website_url('github', 'manu-chroma') returns 'https://github.com/manu-chroma'
  return 'https://{}.com/{}'.format(website, username)

response = requests.get(form_website_url('github', 'manu-chroma'))
if response.status_code == 404:
  print('username available')
else:
  print('username taken')
Much to our delight, it worked! Well, almost. It had a lot of bugs but we didn’t care much at the time. It was my first Python project and the first time I open sourced my work. I always look back on it as a cool idea, proud that I made it and learned a lot in the process.

But the project had been abandoned until John from coala approached me. John suggested we use it for Google Code-in because one of coala’s tasks for the students was to create accounts on a few common coding related websites. Students could use the username availability tool to find a good single username–people like their usernames to be consistent across websites–and coala could use it to verify that the accounts were created.

I had submitted a few patches to coala in the past, so this sounded good to me! The competition clashed with my vacation plans, but I wanted to get involved, so I took the opportunity to become a mentor.

Over the course of the program, students not only used the username availability tool but they also began making major improvements. We took the cue and began adding tasks specifically about the tool. Here are just a few of the things students added:
  • Regex to determine whether a given username was valid for any given website
  • More websites, bringing it to a total of 13
  • Tests (!)
The web app is online so you can check username availability too!

I had such a fun time working with students in Google Code-in; their enthusiasm and energy were amazing. Special thanks to students Andrew, Nalin, Joshua, and biscuitsnake for all the time and effort you put into the project. You did really useful work and I hope you learned from the experience!

I want to thank John for approaching me in the first place and suggesting we use and improve the project. He was an unstoppable force throughout the competition, helping both students and fellow mentors. John even helped me with code reviews to really refine the work students submitted, helping them improve based on the feedback.

Kudos to the Google Open Source team for organizing it so well and lowering the barriers to entry into open source for high school students around the world.

By Manvendra Singh, coala mentor

A galactic experience in Google Code-in 2017

Monday, March 26, 2018

This is a guest post from Liquid Galaxy, one of the organizations that participated in both Google Summer of Code and Google Code-in 2017.

Liquid Galaxy, an open source project that powers panoramic views spanning multiple computers and displays, has been participating in Google Summer of Code (GSoC) since 2011. However, we never applied to participate in Google Code-in (GCI) because we heard stories from other projects about long hours and interrupted holidays in service of mentoring eager young students.

That changed in 2017! And, while the stories are true, we have to say it’s also an amazing and worthwhile experience.

It was hard for our small project to recruit the number of mentors needed. Thankfully, our GSoC mentors stepped up, as did many former GSoC students. We even had forward-thinking students who were interested in participating in GSoC 2018 volunteer to mentor! While it was challenging, our team of mentors helped us have a nearly flawless GCI experience.

The Google Open Source team only had to nudge us once, when a student’s task had been pending review for more than 36 hours. We’re pretty happy with that considering we had nearly 500 tasks completed over the 50 days of the contest.

More important than our experience, though, is the student experience. We learned a lot, seeing how they chose tasks, the attention to detail some of them put into their work, and the level of interaction between the students and the mentors. Considering these were young students, ranging in age from 13 to 17, they far exceeded our expectations.

There was one piece of advice the Google Open Source team gave us that we didn’t understand as GCI newbies: have a large number of tasks ready from day one, and leave some unpublished until the halfway point. That ended up being key: it ensured we had enough tasks for the initial flood of students and some in reserve for the second flood around the holidays. Our team of mentors worked hard from the moment we were accepted into GCI until the contest began, creating over 150 tasks in five different categories. Students seemed to think we did a good job and told us they enjoyed the variety of tasks and level of difficulty.

We’re glad we finally participated in Google Code-in and we’ll definitely be applying next time! You can learn more about the project and the students who worked with us on our blog.

By Andreu Ibáñez, Liquid Galaxy org admin

Cloudprober: open source black-box monitoring software

Friday, March 23, 2018

Ever wonder if users can actually access your microservices? Observing timeouts in your applications, but not sure if it's the network or if your servers are too busy? Curious about the 99th percentile network latency between your on-premise data center and services running in the cloud?

Cloudprober, which we open sourced last year, answers questions like these and more. It’s black-box monitoring software that "probes" your systems and services and generates metrics based on probe results. This kind of monitoring strategy doesn’t make assumptions about how your service is implemented and it works at the same layer as your service’s users. You can make changes to your service’s implementation with peace of mind, knowing you’ll notice if a change prevents users from accessing the service.

A probe can be anything: a ping, an HTTP request, or even a custom program that mimics how your services are consumed (for example, creating and accessing a blog post). Cloudprober builds and exports standard metrics, and provides a way to easily integrate them with your existing monitoring stack, such as Prometheus-Grafana, Stackdriver and soon InfluxDB. Cloudprober is written in Go and works on all major platforms: Linux, Mac OS, and Windows. It's released as a static binary as well as a Docker image.

Here’s an example probe config that runs an HTTP probe against your forwarding rules and exports data to Stackdriver and Prometheus:
probe {
  name: "internal-web"
  type: HTTP
  # Probe all forwarding rules that contain web-fr in their name.
  targets {
    gce_targets {
      forwarding_rules {}
    }
    regex: "web-fr-.*"
  }
  interval_msec: 5000
  timeout_msec: 1000
  http_probe {
    port: 8080
  }
}

# Export data to Stackdriver
surfacer {
  type: STACKDRIVER
}

# Prometheus exporter
surfacer {
  type: PROMETHEUS
}

The probe config is run like this from the command-line:
./cloudprober --config_file $HOME/cloudprober/cloudprober.cfg

This example probe config highlights two major features of Cloudprober: automatic, continuous discovery of cloud targets, and data export over multiple channels (Stackdriver and Prometheus in this case). Cloud deployments are dynamic and constantly changing. Cloudprober's dynamic target discovery feature ensures you have one less thing to worry about when making minor infrastructure changes. Data export in various formats helps it integrate well with your existing monitoring setup.

Other features include:
  • Configuration based on Go text templates, which adds programming capability to configs, such as "for" loops and conditionals (see the sketch after this list)
  • Fast and efficient implementation of core probe types
  • Custom probes through the "external" probe type
  • The ability to read config through metadata
  • Cloud (Stackdriver) logging
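For example, the template support can stamp out several similar probes from a single definition. The snippet below is only a rough sketch: it assumes the mkSlice helper described in Cloudprober's config documentation is available, so check the project docs for the exact template API before relying on it.
# Sketch: define one HTTP probe per port using a template loop.
{{ range $port := mkSlice "8080" "8081" "8082" }}
probe {
  name: "internal-web-{{$port}}"
  type: HTTP
  targets {
    host_names: "www.google.com"
  }
  http_probe {
    port: {{$port}}
  }
  interval_msec: 10000
}
{{ end }}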
Though most of the cloud support is specific to Google Cloud Platform (GCP), it’s easy to add support for other providers. Cloudprober has an extensible architecture so you can add new types of targets, probes and monitoring backends.

Cloudprober was built by the Cloud Networking Site Reliability Engineering (SRE) team at Google to monitor network availability and associated features. Today, it's used by several other Google Cloud SRE teams as well.

We’re excited to share Cloudprober with the wider devops community! You can find more examples in the GitHub repository and more information on the project website.

By Manu Garg, Cloud Networking Team

Open sourcing GTXiLib, an accessibility test automation framework for iOS

Wednesday, March 21, 2018

Google believes everyone should be able to access and enjoy the web. We share guidance on building accessible tech over at Google Accessibility and we recently launched a dedicated disability support team. Today, we’re excited to announce that we’ve open sourced GTXiLib, an accessibility test automation framework for iOS, under the Apache license.

We want our products to be accessible, and automation, with frameworks like GTXiLib, is one of the ways we scale our accessibility testing. GTXiLib can automate the process of checking for some kinds of issues such as missing labels, hints, or low contrast text.

GTXiLib is written in Objective-C and will integrate with your existing XCTests to perform all the registered accessibility checks before the test tearDown. When the checks fail, the existing test fails as well. Fixing your tests will thus lead to better accessibility and your tests can catch new accessibility issues as well.
  • Reuse your tests: GTXiLib integrates into your existing functional tests, enhancing the value of any tests that you have or any that you write.
  • Incremental accessibility testing: GTXiLib can be installed onto a single test case, test class or a specific subset of tests giving you the freedom to add accessibility testing incrementally. This helped drive GTXiLib adoption in large projects at Google.
  • Author your own checks: GTXiLib has a simple API to create custom checks based on the specific needs of your app. For example, you can ensure every button in your app has an accessibilityHint using a custom check.
Do you also care about accessibility? Help us sharpen GTXiLib by suggesting a check or better yet, writing one. You can add GTXiLib to your project using CocoaPods or by using its Xcode project file.

We hope you find this useful and look forward to feedback and contributions from the community! Please check out the README for more information.

By Siddartha Janga, Google Central Accessibility Team 

Celebrating open source mentorship with Joomla

Tuesday, March 20, 2018

Let’s marvel for a moment: as Google Summer of Code (GSoC) 2018 begins, 46 of the participating open source organizations are celebrating a decade or more with the program. There are 586 collective years of mentorship between them, and that’s just through GSoC.

Free and open source software projects have been doing outreach and community building since the beginning. The free software movement has been around for 35 years, and open source has been around for 20.

Bringing new people into open source is necessary for project health and sustainability, but it’s not easy. It takes time and effort to prepare onboarding materials and mentor people. It takes personal dedication, a welcoming culture, and a commitment to institutional knowledge. Sustained volunteerism at this scale is worthy of celebration!

Joomla is one open source project that exemplifies this, and Puneet Kala is one such person. Joomla, a web content management system (CMS) that was first released in 2005, is now in its 11th year of GSoC. More than 80 students have participated over the years. Most students are still actively contributing, and many have gone on to become mentors.

Puneet, now Joomla’s GSoC team lead, began with the project as a student five years ago. He sent along this article celebrating their 10th anniversary, which includes links to interviews with other students who have become mentors, and this panel discussion from Joomla World Conference.

It’s always great to hear from the people who have participated in Google Summer of Code. The stories are inspiring and educational. They know a thing or two about building open source communities, so we share what they have to say: you can find guest posts here.

We’d like to extend our heartfelt thanks to the 608 open source organizations and 12,000 organization administrators and mentors who have been a part of GSoC so far. We’d also like to applaud the 46 organizations that have 10+ years under their belts!

Your tireless investment in the future of people and open source is a testament to generosity.

By Josh Simmons, Google Open Source

Coding your way into cinemas

Monday, March 19, 2018

This is a guest post from apertus° and TimVideos.us, open source organizations that participated in Google Summer of Code last year and are back for 2018!

The apertus° AXIOM project is bringing the world’s first open hardware/free software digital motion picture production camera to life. The project has a rich history, exercises a steadfast adherence to the open source ethos, and all aspects of development have always revolved around supporting and utilising free technologies. The challenge of building a sophisticated digital cinema camera was perfect for Google Summer of Code 2017. But let’s start at the beginning: why did the team behind the project embark on their journey?

Modern Cinematography

For over a century film was dominated by analog cameras and celluloid, but in the late 2000s things changed radically with the adoption of digital projection in cinemas. It was a natural next step, then, for filmmakers to shoot and produce films digitally. Certain applications in science, large format photography and fine arts still hold onto 35mm film processing, but the reduction in costs and improved workflows associated with digital image capture have revolutionised how we create and consume visual content.

The DSLR revolution

Photo by Matthew Pearce, licensed CC SA 2.0.
Filmmaking has long been considered an expensive discipline accessible only to a select few. This all changed with the adoption of movie recording capabilities in digital single-lens reflex (DSLR) cameras. For multinational corporations this “new” feature was a relatively straightforward addition to existing models as most compact digital photo cameras could already record video clips. This was the first time that a large diameter image sensor, a vital component for creating the typical shallow depth of field we consider cinematic, appeared in consumer cameras. In recent times, user groups have stepped up to contribute to the DSLR revolution first-hand, including groups like the Magic Lantern community.

Magic Lantern

Photo by Dave Dugdale licensed CC BY-SA 2.0.
Magic Lantern is a free and open source software add-on that runs from a camera’s SD/CF card. It adds a host of new features to Canon’s DSLRs that weren't included from the factory, such as allowing users to record high-dynamic range (HDR) video or 14-bit uncompressed RAW video. It’s a community project and many filmmakers simply wouldn’t have bought a Canon camera if it weren’t for the features that Magic Lantern pioneered. Because installing Magic Lantern doesn’t replace the stock Canon firmware or modify the read-only memory (ROM) but runs alongside it, it is both easy to remove and carries little risk. Originally developed for filmmaking, Magic Lantern’s feature base has expanded to include tools useful for still photography as well.

Starting the revolution for real 

Of course, Magic Lantern has been held back by the underlying proprietary hardware routines on existing camera models. So, in 2014 a team of developers and filmmakers around the apertus° project joined forces with the Magic Lantern team to lay the foundation for a totally independent, open hardware, free software, digital cinema camera. They ran a successful crowdfunding campaign for initial development, and they completed hardware development of the first developer kits in 2016. Unlike traditional cameras, the AXIOM is designed to be completely modular, and so continuously evolve, thereby preventing it from ever becoming obsolete. How the camera evolves is determined by its user community, with its design files and source code freely available and users encouraged to duplicate, modify and redistribute anything and everything related to the camera.

While the camera is primarily for use in motion picture production, there are many other applications where the AXIOM can be useful. Individuals in science, astronomy, medicine, aerial mapping, and industrial automation, as well as those who record events or talks at conferences, have expressed interest in the camera. A modular and open source device for digital imaging allows users to build a system that meets their unique requirements. One such user, Mavrx Inc, a company that uses aerial imagery to provide actionable insight for the agriculture industry, chose the camera because it enabled them not only to process data more efficiently than comparable cameras, but also to reconfigure its form factor so it could be installed alongside existing equipment.

Google Summer of Code 2017

Continuing their journey, apertus° participated in Google Summer of Code for the first time in 2017. They received about 30 applications from interested students, from which they needed to select three. Projects ranged from field programmable gate array (FPGA) centered video applications to creating Linux kernel drivers for specific camera hardware. Similarly, TimVideos.us, an open hardware project for live event streaming and conference recording, is working on FPGA projects around video interfaces and processing.

After some preliminary work, the students came to grips with the camera’s operating processes and all three dove in enthusiastically. One student failed the first evaluation and another failed the second, but one student successfully completed their work.

That student, Vlad Niculescu, worked on defining control loops for a voltage controller using VHSIC Hardware Description Language (VHDL) for a potential future AXIOM Beta Power Board, an FPGA-driven smart switching regulator for increasing the power efficiency and improving flexibility around voltage regulation.
Left: The printed circuit board (PCB) for testing the switching regulator FPGA logic. Right: After final improvements, the ripple in the output voltage was reduced to around 30mV at a 2V target voltage.
Vlad had this to say about his experience:

“The knowledge I acquired during my work with this project and apertus° was very satisfying. Besides the electrical skills gained I also managed to obtain other, important universal skills. One of the things I learned was that the key to solving complex problems can often be found by dividing them into small blocks so that the greater whole can be easily observed by others. Writing better code and managing the stages of building a complex project have become lessons that will no doubt become valuable in the future. I will always be grateful to my mentor as he had the patience to explain everything carefully and teach me new things step by step, and also to apertus° and Google’s Summer of Code program, without which I may not have gained the experience of working on a project like this one.”

We are grateful for Vlad’s work and congratulate him for successfully completing the program. If you find open hardware and video production interesting, we encourage you to reach out and join the community–both apertus° and TimVideos.us are back for Google Summer of Code 2018.

By Sebastian Pichelhofer, apertus°, and Tim 'mithro' Ansell, TimVideos.us

Googlers on the road: FOSSASIA Summit 2018

Friday, March 16, 2018

In a week’s time, free and open source enthusiasts of all kinds will gather in Singapore for FOSSASIA Summit 2018. Established in 2009, the annual event attracts more than 3,000 attendees, running from March 22nd to 25th this year.

FOSSASIA logo licensed LGPL-2.1.
FOSSASIA Summit is organized by FOSSASIA, a nonprofit organization that focuses on Asia and brings people together around open technology both in-person and online. The organization is also home to many open source projects and is a regular participant in the Google Open Source team’s student programs, Google Summer of Code and Google Code-in.

Our team is excited to be among those attending and speaking at the conference this year, and we’re proud that Google Cloud is a sponsor. If you’re around, please come say hello. The highlight of our travel is meeting the students and mentors who have participated in our programs!

Here are the Googlers who will be giving presentations:

Thursday, March 22nd

2:00pm Real-world Machine Learning with TensorFlow and Cloud ML by Kaz Sato

Friday, March 23rd

9:30am BigQuery codelab by Jan Peuker
10:30am Working with Cloud DataPrep by KC Ayyagari
1:00pm Extract, analyze & translate Text from Images with Cloud ML APIs by Sara Robinson
2:00pm Bitcoin in BigQuery: blockchain analytics on public data by Allen Day
2:40pm What can we learn from 1.1 billion GitHub events and 42 TB of code? by Felipe Hoffa
2:40pm Engaging IoT solutions with Machine Learning by Markku Lepisto
2:45pm CloudML Engine: Qwik Start by Kaz Sato
3:20pm Systems as choreographed behavior with Kubernetes by Jan Peuker
4:00pm The Assistant by Manikantan Krishnamurthy

Saturday, March 24th

10:30am Google Summer of Code and Google Code-in by Stephanie Taylor
10:30am Zero to ML on Google Cloud Platform by Sara Robinson
11:00am Building a Sustainable Open Tech Community through Coding Programs, Contests and Hackathons panel including Stephanie Taylor
11:05am Codifying Security and Modern Secrets Management by Seth Vargo
1:00pm    Open Source Education panel including Cat Allman
5:00pm Everything as Code by Seth Vargo

We look forward to seeing you there!

By Josh Simmons, Google Open Source

Semantic Image Segmentation with DeepLab in TensorFlow

Thursday, March 15, 2018

Cross-posted on the Google Research Blog.

Semantic image segmentation, the task of assigning a semantic label, such as “road”, “sky”, “person” or “dog”, to every pixel in an image, enables numerous new applications, such as the synthetic shallow depth-of-field effect shipped in the portrait mode of the Pixel 2 and Pixel 2 XL smartphones and mobile real-time video segmentation. Assigning these semantic labels requires pinpointing the outline of objects, and thus imposes much stricter localization accuracy requirements than other visual entity recognition tasks such as image-level classification or bounding box-level detection.


Today, we are excited to announce the open source release of our latest and best performing semantic image segmentation model, DeepLab-v3+ [1]*, implemented in TensorFlow. This release includes DeepLab-v3+ models built on top of a powerful convolutional neural network (CNN) backbone architecture [2, 3] for the most accurate results, intended for server-side deployment. As part of this release, we are additionally sharing our TensorFlow model training and evaluation code, as well as models already pre-trained on the Pascal VOC 2012 and Cityscapes benchmark semantic segmentation tasks.

Since the first incarnation of our DeepLab model [4] three years ago, improved CNN feature extractors, better object scale modeling, careful assimilation of contextual information, improved training procedures, and increasingly powerful hardware and software have led to improvements with DeepLab-v2 [5] and DeepLab-v3 [6]. With DeepLab-v3+, we extend DeepLab-v3 by adding a simple yet effective decoder module to refine the segmentation results especially along object boundaries. We further apply the depthwise separable convolution to both atrous spatial pyramid pooling [5, 6] and decoder modules, resulting in a faster and stronger encoder-decoder network for semantic segmentation.
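As a concrete (and deliberately tiny) illustration of the building block described above, the TensorFlow 1.x sketch below applies an atrous depthwise separable convolution to a dummy feature map. It is not the DeepLab implementation, and the shapes are arbitrary.
import tensorflow as tf

# Toy input: one 65x65 feature map with 32 channels.
inputs = tf.placeholder(tf.float32, [1, 65, 65, 32])

# Depthwise filter: one 3x3 kernel per input channel.
depthwise_filter = tf.get_variable('depthwise', [3, 3, 32, 1])
# Pointwise filter: 1x1 convolution mixing 32 channels into 64.
pointwise_filter = tf.get_variable('pointwise', [1, 1, 32, 64])

# rate=[2, 2] makes the depthwise step atrous, enlarging its receptive field
# without adding parameters; the 1x1 pointwise convolution then follows.
outputs = tf.nn.separable_conv2d(
    inputs, depthwise_filter, pointwise_filter,
    strides=[1, 1, 1, 1], padding='SAME', rate=[2, 2])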


Modern semantic image segmentation systems built on top of convolutional neural networks (CNNs) have reached accuracy levels that were hard to imagine even five years ago, thanks to advances in methods, hardware, and datasets. We hope that publicly sharing our system with the community will make it easier for other groups in academia and industry to reproduce and further improve upon state-of-art systems, train models on new datasets, and envision new applications for this technology.

By Liang-Chieh Chen and Yukun Zhu, Google Research

Acknowledgements
We would like to thank the support and valuable discussions with Iasonas Kokkinos, Kevin Murphy, Alan L. Yuille (co-authors of DeepLab-v1 and -v2), as well as Mark Sandler, Andrew Howard, Menglong Zhu, Chen Sun, Derek Chow, Andre Araujo, Haozhi Qi, Jifeng Dai, and the Google Mobile Vision team.

References
  1. Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation, Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, and Hartwig Adam, arXiv: 1802.02611, 2018.
  2. Xception: Deep Learning with Depthwise Separable Convolutions, François Chollet, Proc. of CVPR, 2017.
  3. Deformable Convolutional Networks — COCO Detection and Segmentation Challenge 2017 Entry, Haozhi Qi, Zheng Zhang, Bin Xiao, Han Hu, Bowen Cheng, Yichen Wei, and Jifeng Dai, ICCV COCO Challenge Workshop, 2017.
  4. Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs, Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille, Proc. of ICLR, 2015.
  5. Deeplab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs, Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille, TPAMI, 2017.
  6. Rethinking Atrous Convolution for Semantic Image Segmentation, Liang-Chieh Chen, George Papandreou, Florian Schroff, and Hartwig Adam, arXiv:1706.05587, 2017.


* DeepLab-v3+ is not used to power Pixel 2's portrait mode or real time video segmentation. These are mentioned in the post as examples of features this type of technology can enable.

Open sourcing Resonance Audio

Wednesday, March 14, 2018

Spatial audio adds to your sense of presence when you’re in VR or AR, making it feel and sound like you’re surrounded by a virtual or augmented world. And regardless of the display hardware you’re using, spatial audio makes it possible to hear sounds coming from all around you.

Resonance Audio, our spatial audio SDK launched last year, enables developers to create more realistic VR and AR experiences on mobile and desktop. We’ve seen a number of exciting experiences emerge across a variety of platforms using our SDK. Recent examples include apps like Pixar’s Coco VR for Gear VR, Disney’s Star Wars™: Jedi Challenges AR app for Android and iOS, and Runaway’s Flutter VR for Daydream, which all used Resonance Audio technology.

To accelerate adoption of immersive audio technology and strengthen the developer community around it, we’re opening Resonance Audio to a community-driven development model. By creating an open source spatial audio project optimized for mobile and desktop computing, any platform or software development tool provider can easily integrate with Resonance Audio. More cross-platform and tooling support means more distribution opportunities for content creators, without the worry of investing in costly porting projects.

What’s Included in the Open Source Project

As part of our open source project, we’re providing a reference implementation of YouTube’s Ambisonic-based spatial audio decoder, compatible with the same Ambisonics format (Ambix ACN/SN3D) used by others in the industry. Using our reference implementation, developers can easily render Ambisonic content in their VR media and other applications, while benefiting from Ambisonics’ open source, royalty-free model. The project also includes encoding, sound field manipulation and decoding techniques, as well as head related transfer functions (HRTFs) that we’ve used to achieve rich spatial audio that scales across a wide spectrum of device types and platforms. Lastly, we’re making our entire library of highly optimized DSP classes and functions open to all. This includes resamplers, convolvers, filters, delay lines and other DSP capabilities. Additionally, developers can now use Resonance Audio’s brand new Spectral Reverb, an efficient, high quality, constant complexity reverb effect, in their own projects.

We’ve open sourced Resonance Audio as a standalone library and associated engine plugins, VST plugin, tutorials, and examples with the Apache 2.0 license. This means Resonance Audio is yours, so you’re free to use Resonance Audio in your projects, no matter where you work. And if you see something you’d like to improve, submit a GitHub pull request to be reviewed by the Resonance Audio project committers. While the engine plugins for Unity, Unreal, FMOD, and Wwise will remain open source, going forward they will be maintained by project committers from our partners, Unity, Epic, Firelight Technologies, and Audiokinetic, respectively.

If you’re interested in learning more about Resonance Audio, check out the documentation on our developer site. If you want to get more involved, visit our GitHub to access the source code, build the project, download the latest release, or even start contributing. We’re looking forward to building the future of immersive audio with all of you.

By Eric Mauskopf, Google AR/VR Team

The value of OpenCensus

Tuesday, March 13, 2018

This post is the second in a series about OpenCensus. You can find the first post here.

Early this year we open sourced OpenCensus, a distributed tracing and stats instrumentation framework. Today, we continue our journey by discussing the history and motivation behind the project here at Google, and what benefits OpenCensus has to offer. As OpenCensus continues to gain partners we’ll be shifting the focus away from Google, but we wanted to use this post as an opportunity to answer some of the questions that we’re most commonly asked at meetings and events.

Why did Google open source this? Why now?

Google open sources a lot of projects and we’ve begun documenting some of the reasons why on the Google Open Source website. What about OpenCensus specifically? There are many reasons it made sense for us to release this project and get others involved.

We had already released other related projects. The Census team had been eager to share their work with the public for a while. With projects like gRPC and Istio going open source, it made sense to release OpenCensus as well.

It helped us serve our customers better. Teams with performance-sensitive APIs like BigTable and Spanner needed more insight into their customers’ calling patterns while debugging issues, and wanted a way to connect customers’ traced requests to equivalent traces inside of Google.

Managing integrations ourselves is costly. The Stackdriver Trace engineering team had been investing considerable resources building their own instrumentation libraries across seven languages, and it became apparent that the cost of building and maintaining integrations into web and RPC frameworks would continue in perpetuity. Releasing these libraries might encourage framework providers to manage these integrations instead.

We have a vested interest in everyone else’s reliability and performance. Google is a web search and cloud services provider, and its users benefit as web services and applications become increasingly reliable and performant. Popularizing distributed tracing and app-level metrics is one way to achieve this. This is especially important with the rising popularity of microservices-based architectures, which are difficult to debug without distributed tracing.

This expands the market for other services. By making tracing and app-level metrics more accessible, we grow the overall market for monitoring and application performance management (APM) tools, which benefits Stackdriver Monitoring and Stackdriver Trace.

As these factors came into focus, the decision to open source the project became clear.

Benefits to Partners and the Community

Google’s reasons for developing and promoting OpenCensus apply to partners at all levels.

Service developers reap the benefits of having automatic traces and stats collection, along with vendor-neutral APIs for manually interacting with these. Developers who use open source backends like Prometheus or Zipkin benefit from having a single set of well-supported instrumentation libraries that can export to both services at once.

For APM vendors, being able to take advantage of already-provided language support and framework integrations is huge, and the exporter API allows traces and metrics to be sent to an ingestion API without much additional work. Developers who might have been working on instrumentation code can now focus on other more important tasks, and vendors get traces and metrics back from places they previously didn’t have coverage for.

Cloud and API providers have the added benefit of being able to include OpenCensus in client libraries, allowing customers to gain insight into performance characteristics and debug issues without having to contact support. In situations where customers were still not able to diagnose their own issues, customer traces can be matched with internal traces for faster root cause analysis, regardless of which tracing or APM product they use.

What’s Next

If you missed the first post in our series, you can read it now. In upcoming blog posts and videos we’ll discuss:
  • The current schedule for OpenCensus on a language-by-language basis
  • Guides on how to add custom instrumentation to your application
  • Techniques for adding more automatic integrations to OpenCensus
  • Our long-term vision for OpenCensus
Thanks for reading – we’ll see you on GitHub!

By Pritam Shah and Morgan McLean, Census team

Student applications open for Google Summer of Code 2018

Monday, March 12, 2018

Ready, set, go! Today we begin accepting applications from university students who want to participate in Google Summer of Code (GSoC) 2018. Are you a university student? Want to use your software development skills for good? Read on.

Now entering its 14th year, GSoC gives students from around the globe an opportunity to learn the ins and outs of open source software development while working from home. Students receive a stipend for successful contribution to allow them to focus on their project for the duration of the program. A passionate community of mentors help students navigate technical challenges and monitor their progress along the way.

Past participants say the real-world experience that GSoC provides sharpened their technical skills, boosted their confidence, expanded their professional network and enhanced their resume.

Interested students can submit proposals on the program site between now and Tuesday, March 27, 2018 at 16:00 UTC.

While many students began preparing in February when we announced the 212 participating open source organizations, it’s not too late to start! The first step is to browse the list of organizations and look for project ideas that appeal to you. Next, reach out to the organization to introduce yourself and determine if your skills and interests are a good fit. Since spots are limited, we recommend writing a strong proposal and submitting a draft early so you can get feedback from the organization and increase the odds of being selected.

You can learn more about how to prepare in the video below and in the Student Guide.


You can find more information on our website, including a full timeline of important dates. We also highly recommend perusing the FAQ and Program Rules, as well as joining the discussion mailing list.

Remember to submit your proposals early as you only have until Tuesday, March 27 at 16:00 UTC. Good luck to all who apply!

By Josh Simmons, Google Open Source

Open Sourcing the Hunt for Exoplanets

Friday, March 9, 2018



Recently, we discovered two exoplanets by training a neural network to analyze data from NASA’s Kepler space telescope and accurately identify the most promising planet signals. And while this was only an initial analysis of ~700 stars, we consider this a successful proof-of-concept for using machine learning to discover exoplanets, and more generally another example of using machine learning to make meaningful gains in a variety of scientific disciplines (e.g. healthcare, quantum chemistry, and fusion research).

Today, we’re excited to release our code for processing the Kepler data, training our neural network model, and making predictions about new candidate signals. We hope this release will prove a useful starting point for developing similar models for other NASA missions, like K2 (Kepler’s second mission) and the upcoming Transiting Exoplanet Survey Satellite mission. As well as announcing the release of our code, we’d also like to take this opportunity to dig a bit deeper into how our model works.

A Planet Hunting Primer

First, let’s consider how data collected by the Kepler telescope is used to detect the presence of a planet. The plot below is called a light curve, and it shows the brightness of the star (as measured by Kepler’s photometer) over time. When a planet passes in front of the star, it temporarily blocks some of the light, which causes the measured brightness to decrease and then increase again shortly thereafter, causing a “U-shaped” dip in the light curve.
A light curve from the Kepler space telescope with a “U-shaped” dip that indicates a transiting exoplanet.
However, other astronomical and instrumental phenomena can also cause the measured brightness of a star to decrease, including binary star systems, starspots, cosmic ray hits on Kepler’s photometer, and instrumental noise.
The first light curve has a “V-shaped” pattern that tells us that a very large object (i.e. another star) passed in front of the star that Kepler was observing. The second light curve contains two places where the brightness decreases, which indicates a binary system with one bright and one dim star: the larger dip is caused by the dimmer star passing in front of the brighter star, and vice versa. The third light curve is one example of the many other non-planet signals where the measured brightness of a star appears to decrease.
To search for planets in Kepler data, scientists use automated software (e.g. the Kepler data processing pipeline) to detect signals that might be caused by planets, and then manually follow up to decide whether each signal is a planet or a false positive. To avoid being overwhelmed with more signals than they can manage, the scientists apply a cutoff to the automated detections: those with signal-to-noise ratios above a fixed threshold are deemed worthy of follow-up analysis, while all detections below the threshold are discarded. Even with this cutoff, the number of detections is still formidable: to date, over 30,000 detected Kepler signals have been manually examined, and about 2,500 of those have been validated as actual planets!

Perhaps you’re wondering: does the signal-to-noise cutoff cause some real planet signals to be missed? The answer is, yes! However, if astronomers need to manually follow up on every detection, it’s not really worthwhile to lower the threshold, because as the threshold decreases the rate of false positive detections increases rapidly and actual planet detections become increasingly rare. However, there’s a tantalizing incentive: it’s possible that some potentially habitable planets like Earth, which are relatively small and orbit around relatively dim stars, might be hiding just below the traditional detection threshold — there might be hidden gems still undiscovered in the Kepler data!

A Machine Learning Approach

The Google Brain team applies machine learning to a diverse variety of data, from human genomes to sketches to formal mathematical logic. Considering the massive amount of data collected by the Kepler telescope, we wondered what we might find if we used machine learning to analyze some of the previously unexplored Kepler data. To find out, we teamed up with Andrew Vanderburg at UT Austin and developed a neural network to help search the low signal-to-noise detections for planets.
We trained a convolutional neural network (CNN) to predict the probability that a given Kepler signal is caused by a planet. We chose a CNN because they have been very successful in other problems with spatial and/or temporal structure, like audio generation and image classification.
Luckily, we had 30,000 Kepler signals that had already been manually examined and classified by humans. We used a subset of around 15,000 of these signals, of which around 3,500 were verified planets or strong planet candidates, to train our neural network to distinguish planets from false positives. The inputs to our network are two separate views of the same light curve: a wide view that allows the model to examine signals elsewhere on the light curve (e.g., a secondary signal caused by a binary star), and a zoomed-in view that enables the model to closely examine the shape of the detected signal (e.g., to distinguish “U-shaped” signals from “V-shaped” signals).
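As a rough illustration of that two-view idea (and not the actual AstroNet architecture, whose details are in the released code and paper), a model with a wide “global” input and a zoomed-in “local” input could be wired up in tf.keras roughly like this:
import tensorflow as tf

# Assumed input lengths for the wide (global) and zoomed-in (local) views.
global_view = tf.keras.layers.Input(shape=(2001, 1), name='global_view')
local_view = tf.keras.layers.Input(shape=(201, 1), name='local_view')

def conv_tower(x, filters):
  # A small stack of 1-D convolutions that summarizes one view of the light curve.
  x = tf.keras.layers.Conv1D(filters, 5, activation='relu', padding='same')(x)
  x = tf.keras.layers.MaxPooling1D(5)(x)
  x = tf.keras.layers.Conv1D(filters * 2, 5, activation='relu', padding='same')(x)
  return tf.keras.layers.GlobalMaxPooling1D()(x)

# Merge the two views and predict the probability that the signal is a planet.
merged = tf.keras.layers.concatenate(
    [conv_tower(global_view, 16), conv_tower(local_view, 16)])
hidden = tf.keras.layers.Dense(128, activation='relu')(merged)
output = tf.keras.layers.Dense(1, activation='sigmoid')(hidden)

model = tf.keras.Model(inputs=[global_view, local_view], outputs=output)
model.compile(optimizer='adam', loss='binary_crossentropy')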

Once we had trained our model, we investigated the features it learned about light curves to see if they matched with our expectations. One technique we used (originally suggested in this paper) was to systematically occlude small regions of the input light curves to see whether the model’s output changed. Regions that are particularly important to the model’s decision will change the output prediction if they are occluded, but occluding unimportant regions will not have a significant effect. Below is a light curve from a binary star that our model correctly predicts is not a planet. The points highlighted in green are the points that most change the model’s output prediction when occluded, and they correspond exactly to the secondary “dip” indicative of a binary system. When those points are occluded, the model’s output prediction changes from ~0% probability of being a planet to ~40% probability of being a planet. So, those points are part of the reason the model rejects this light curve, but the model uses other evidence as well - for example, zooming in on the centred primary dip shows that it's actually “V-shaped”, which is also indicative of a binary system.
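A bare-bones version of that occlusion check is easy to write yourself. The sketch below assumes a hypothetical model_predict function, standing in for the trained network, that maps a light curve (a NumPy array of brightness measurements) to a planet probability:
import numpy as np

def occlusion_importance(light_curve, model_predict, window=10):
  """Score each region of a light curve by how much hiding it changes the prediction."""
  baseline = model_predict(light_curve)
  scores = np.zeros(len(light_curve))
  for start in range(0, len(light_curve), window):
    occluded = light_curve.copy()
    # Replace this window with the median flux, hiding any dip that was there.
    occluded[start:start + window] = np.median(light_curve)
    scores[start:start + window] = abs(model_predict(occluded) - baseline)
  return scores  # high scores mark the points the model relies on most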

Searching for New Planets

Once we were confident with our model’s predictions, we tested its effectiveness by searching for new planets in a small set of 670 stars. We chose these stars because they were already known to have multiple orbiting planets, and we believed that some of these stars might host additional planets that had not yet been detected. Importantly, we allowed our search to include signals that were below the signal-to-noise threshold that astronomers had previously considered. As expected, our neural network rejected most of these signals as spurious detections, but a handful of promising candidates rose to the top, including our two newly discovered planets: Kepler-90 i and Kepler-80 g.

Find your own Planet(s)!

Let’s take a look at how the code released today can help (re-)discover the planet Kepler-90 i. The first step is to train a model by following the instructions on the code’s home page. It takes a while to download and process the data from the Kepler telescope, but once that’s done, it’s relatively fast to train a model and make predictions about new signals. One way to find new signals to show the model is to use an algorithm called Box Least Squares (BLS), which searches for periodic “box shaped” dips in brightness (see below). The BLS algorithm will detect “U-shaped” planet signals, “V-shaped” binary star signals and many other types of false positive signals to show the model. There are various freely available software implementations of the BLS algorithm, including VARTOOLS and LcTools. Alternatively, you can even look for candidate planet transits by eye, like the Planet Hunters.
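If you just want a feel for what a box search does before reaching for one of those tools, here is a deliberately naive NumPy sketch of the idea (not the BLS algorithm as implemented in VARTOOLS or LcTools):
import numpy as np

def toy_box_search(time, flux, trial_periods, duration):
  """Phase-fold at each trial period and find the box whose mean flux drops the most."""
  best_period, best_t0, best_score = None, None, -np.inf
  for period in trial_periods:
    phase = time % period
    for t0 in np.arange(0.0, period, duration):
      in_box = (phase >= t0) & (phase < t0 + duration)
      if in_box.sum() < 3 or (~in_box).sum() < 3:
        continue
      depth = flux[~in_box].mean() - flux[in_box].mean()
      # Crude significance: transit depth relative to the scatter outside the box.
      score = depth / (flux[~in_box].std() + 1e-12)
      if score > best_score:
        best_period, best_t0, best_score = period, t0, score
  return best_period, best_t0, best_score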
A low signal-to-noise detection in the light curve of the Kepler 90 star detected by the BLS algorithm. The detection has period 14.44912 days, duration 2.70408 hours (0.11267 days) beginning 2.2 days after 12:00 on 1/1/2009 (the year the Kepler telescope launched).
To run this detected signal through our trained model, we simply execute the following command:
python predict.py --kepler_id=11442793 --period=14.44912 --t0=2.2 \
  --duration=0.11267 --kepler_data_dir=$HOME/astronet/kepler \
  --output_image_file=$HOME/astronet/kepler-90i.png \
  --model_dir=$HOME/astronet/model
The output of the command is prediction = 0.94, which means the model is 94% certain that this signal is a real planet. Of course, this is only a small step in the overall process of discovering and validating an exoplanet: the model’s prediction is not proof one way or the other. The process of validating this signal as a real exoplanet requires significant follow-up work by an expert astronomer — see Sections 6.3 and 6.4 of our paper for the full details. In this particular case, our follow-up analysis validated this signal as a bona fide exoplanet, and it’s now called Kepler-90 i!
Our work here is far from done. We’ve only searched 670 stars out of 200,000 observed by Kepler — who knows what we might find when we turn our technique to the entire dataset. Before we do that, though, we have a few improvements we want to make to our model. As we discussed in our paper, our model is not yet as good at rejecting binary stars and instrumental false positives as some more mature computer heuristics. We’re hard at work improving our model, and now that it’s open sourced, we hope others will do the same!


By Chris Shallue, Senior Software Engineer, Google Brain Team

If you’d like to learn more, Chris is featured on the latest episode of This Week In Machine Learning & AI discussing his work.

A year full of new open source at Catrobat

Thursday, March 8, 2018

This is a guest post from Catrobat, an open source organization that participated in both Google Summer of Code and Google Code-in last year.


Catrobat was selected to participate in Google Summer of Code (GSoC) for the sixth time and Google Code-in (GCI) for the first time in 2017, which helped us reach new students and keep our mentors busy.

We tried something new in 2017 by steering GSoC students toward refactoring and performance, rather than developing new features. Implementing a crash tracking and analysis system, modularizing existing code, and rewriting our tests resulted in more lines of code being deleted than added – and we’re really happy about that!

This improved the quality and stability of our software, and both students and mentors could see progress immediately. The immediacy of the results kept students engaged; some weeks it almost seemed as if they had been working 24/7 (they weren’t :)! And we’re happy to say that most are still motivated to contribute after GSoC, and now they’re adding code more often than they are deleting it.

Although new features are exciting, we found that working on existing code offers a smooth entry for GSoC students. This approach helped students assimilate into the community and project more quickly, as well as receive rapid rewards for their work.

The quality improvements made by GSoC students also made things smoother for the younger, often less experienced GCI students. Several dozen students completed hundreds of tasks, spreading the love of open source and coding in their communities. It was our first time working with so many young contributors and it was fun!

We faced challenges in the beginning – such as language barriers and students’ uncertainty in their work – and quickly learned how to adapt our processes to meet the needs (and extraordinary motivation) of these new young contributors. We introduced them to open source through our project’s app Pocket Code, allowing them to program games and apps with a visual mobile coding framework and then share them under an open license. Students had a lot of fun starting this way and mentors enjoyed reviewing so many colorful and exciting games.

Students even asked how they could improve on quality work that we had already accepted, if they could do more work on it, and if they could share their projects with their friends. This was a great first experience of GCI for our organization and, as one of our mentors mentioned in the final evaluation phase, we would totally be up for doing it again!

By Matthias Mueller, Catrobat Org Admin

How Google uses Census internally

Wednesday, March 7, 2018

This post is the first in a series about OpenCensus, a set of open source instrumentation libraries based on what we use inside Google. This series will cover the benefits of OpenCensus for developers and vendors, Google’s interest in open sourcing instrumentation tools, how to get started with OpenCensus, and our long-term vision.

If you’re new to distributed tracing and metrics, we recommend Adrian Cole’s excellent talk on the subject: Observability Three Ways.

Gaining Observability into Planet-Scale Computing

Google adopted or invented new technologies, including distributed tracing (Dapper) and metrics processing, in order to operate some of the world’s largest web services. However, building analysis systems didn’t solve the difficult problem of instrumenting and extracting data from production services. This is what Census was created to do.

The Census project provides uniform instrumentation across most Google services, capturing trace spans, app-level metrics, and other metadata like log correlations from production applications. One of the biggest benefits of uniform instrumentation to developers inside of Google is that it’s almost entirely automatic: any service that uses gRPC automatically collects and exports basic traces and metrics.

OpenCensus offers these capabilities to developers everywhere. Today we’re sharing how we use distributed tracing and metrics inside of Google.

Incident Management

When latency problems or new errors crop up in a highly distributed environment, visibility into what’s happening is critical. For example, when the latency of a service crosses expected boundaries, we can view distributed traces in Dapper to find where things are slowing down. Or when a request is returning an error, we can look at the chain of calls that led to the error and examine the metadata captured during a trace (typically logs or trace annotations). This is effectively a bigger stack trace. In rare cases, we enable custom trigger-based sampling which allows us to focus on specific kinds of requests.

Once we know there’s a production issue, we can use Census data to determine the regions, services, and scope (one customer vs many) of a given problem. You can use service-specific diagnostics pages, called “z-pages,” to monitor problems and the results of solutions you deploy. These pages are hosted locally on each service and provide a firehose view of recent requests, stats, and other performance-related information.

Performance Optimization

At Google’s scale, we need to be able to instrument and attribute costs for services. We use Census to help us answer questions like:
  • How much CPU time does my query consume?
  • Does my feature consume more storage resources than before?
  • What is the cost of a particular user operation at a particular layer of the stack?
  • What is the total cost of a particular user operation across all layers of the stack?
We’re obsessed with reducing the tail latency of all services, so we’ve built sophisticated analysis systems that process traces and metrics captured by Census to identify regressions and other anomalies.

Quality of Service

Google also improves performance dynamically depending on the source and type of traffic. Using Census tags, traffic can be directed to more appropriate shards, or we can do things like load shedding and rate limiting.

Next week we’ll discuss Google’s motivations for open sourcing Census, then we’ll shift the focus back onto the open source project itself.

By Pritam Shah and Morgan McLean, Census team

The Building Blocks of Interpretability

Tuesday, March 6, 2018

Cross-posted on the Google Research Blog.

In 2015, our early attempts to visualize how neural networks understand images led to psychedelic images. Soon after, we open sourced our code as DeepDream and it grew into a small art movement producing all sorts of amazing things. But we also continued the original line of research behind DeepDream, trying to address one of the most exciting questions in Deep Learning: how do neural networks do what they do?

Last year in the online journal Distill, we demonstrated how those same techniques could show what individual neurons in a network do, rather than just what is “interesting to the network” as in DeepDream. This allowed us to see how neurons in the middle of the network are detectors for all sorts of things — buttons, patches of cloth, buildings — and see how those build up to be more and more sophisticated over the network’s layers.
Visualizations of neurons in GoogLeNet. Neurons in higher layers represent higher level ideas.
While visualizing neurons is exciting, our work last year was missing something important: how do these neurons actually connect to what the network does in practice?

Today, we’re excited to publish “The Building Blocks of Interpretability,” a new Distill article exploring how feature visualization can combine with other interpretability techniques to understand aspects of how networks make decisions. We show that these combinations can allow us to sort of “stand in the middle of a neural network” and see some of the decisions being made at that point, and how they influence the final output. For example, we can see things like how a network detects a floppy ear, and then that increases the probability it gives to the image being a “Labrador retriever” or “beagle”.

We explore techniques for understanding which neurons fire in the network. Normally, if we ask which neurons fire, we get something meaningless like “neuron 538 fired a little bit,” which isn’t very helpful even to experts. Our techniques make things more meaningful to humans by attaching visualizations to each neuron, so we can see things like “the floppy ear detector fired”. It’s almost a kind of MRI for neural networks.
We can also zoom out and show how the entire image was “perceived” at different layers. This allows us to really see the transition from the network detecting very simple combinations of edges, to rich textures and 3d structure, to high-level structures like ears, snouts, heads and legs.
These insights are exciting by themselves, but they become even more exciting when we can relate them to the final decision the network makes. So not only can we see that the network detected a floppy ear, but we can also see how that increases the probability of the image being a labrador retriever.
In addition to our paper, we’re also releasing Lucid, a neural network visualization library building off our work on DeepDream. It allows you to make the sort of lucid feature visualizations we see above, in addition to more artistic DeepDream images.
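Getting a first visualization out of Lucid takes only a few lines. The sketch below follows the pattern from the Lucid quickstart at the time of writing; the module paths and layer name are taken from that quickstart and may change, so treat it as a starting point rather than a reference.
import lucid.modelzoo.vision_models as models
import lucid.optvis.render as render

# Load a pre-trained GoogLeNet (InceptionV1) from the Lucid model zoo.
model = models.InceptionV1()
model.load_graphdef()

# Optimize an input image that maximally activates one neuron in layer mixed4a.
render.render_vis(model, "mixed4a_pre_relu:476")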

We’re also releasing colab notebooks. These notebooks make it extremely easy to use Lucid to reproduce visualizations in our article! Just open the notebook, click a button to run code — no setup required!
In colab notebooks you can click a button to run code, and see the result below.
This work only scratches the surface of the kind of interfaces that we think it’s possible to build for understanding neural networks. We’re excited to see what the community will do — and we’re excited to work together towards deeper human understanding of neural networks.

By Chris Olah, Research Scientist and Arvind Satyanarayan, Visiting Researcher, Google Brain Team

Congratulating the latest Open Source Peer Bonus winners

Thursday, March 1, 2018

We’re pleased to introduce 2018’s first round of Open Source Peer Bonus winners. First started by the Google Open Source team seven years ago, this program encourages Google employees to express their gratitude to open source contributors.

Twice a year Googlers nominate open source contributors outside of the company for their contributions to open source projects, including those used by Google. Nominees are reviewed by a team of volunteers and the winners receive our heartfelt thanks with a token of our appreciation.

So far more than 600 contributors from dozens of countries have received Open Source Peer Bonuses for volunteering their time and talent to over 400 open source projects. You can find some of the previous winners in these blog posts.

We’d like to recognize the latest round of winners and the projects they worked on. Listed below are the individuals who gave us permission to thank them publicly:

  • Adrien Devresse (Abseil C++)
  • Weston Ruter (AMP Plugin for WordPress)
  • Thierry Muller (AMP Plugin for WordPress)
  • Adam Silverstein (AMP Project)
  • Levi Durfee (AMP Project)
  • Fabian Wiles (Angular)
  • Paul King (Apache Groovy)
  • Eric Eide (C-Reduce)
  • John Regehr (C-Reduce)
  • Yang Chen (C-Reduce)
  • Ajith Kumar Velutheri (Chromium)
  • Orta Therox (CocoaPods)
  • Idwer Vollering (coreboot)
  • Paul Ganssle (dateutil)
  • Zach Leatherman (Eleventy)
  • Daniel Stone (freedesktop.org)
  • Sergiu Deitsch (glog)
  • Jonathan Bluett-Duncan (Guava)
  • Karol Szczepański (Heracles.ts)
  • Paulus Schoutsen (Home Assistant)
  • Nathaniel Welch (Fog for Google)
  • Shannon Coen (Istio)
  • Max Beatty (jsPerf)
  • Friedel Ziegelmayer (Karma)
  • Davanum Srinivas (Kubernetes)
  • Jennifer Rondeau (Kubernetes)
  • Jessica Yao (Kubernetes)
  • Qiming Teng (Kubernetes)
  • Zachary Corleissen (Kubernetes)
  • Reinhard Nägele (Kubernetes Charts)
  • Erez Shinan (Lark)
  • Alex Gaynor (Mercurial)
  • Anna Henningsen (Node.js)
  • Michaël Zasso (Node.js)
  • Michael Dalessio (Nokogiri)
  • Gina Häußge (OctoPrint)
  • Michael Stramel (Polymer)
  • La Vesha Parker (Progressive HackNight)
  • Ian Stapleton Cordasco (Python Code Quality Authority)
  • Fabian Henneke (Secure Shell)
  • Rob Landley (Toybox)
  • Peter Wong (V8)
  • Timothy Gu (Web platform & Node.js)
  • Ola Hugosson (WebM)
  • Dominic Symes (WebM & AOMedia)

To each and every one of you: thank you for your contributions to the open source community and congratulations!

By Maria Webb, Google Open Source