Posts from 2017

Authenticating to HashiCorp Vault using Google Cloud IAM

Wednesday, August 16, 2017

Applications often require access to small pieces of sensitive data at build or run time, referred to as secrets. Secrets are generally more sensitive than other environment variables or parts of your repository as they may grant access to additional data, such as user data.

HashiCorp Vault is a popular open source tool for secret management that allows a developer to store, manage and control access to tokens, passwords, certificates, API keys and other secrets. Vault has many options for authentication, called authentication backends. These allow developers to use many kinds of identities to access Vault, including tokens or usernames and passwords. As the number of developers on a team grows, these kinds of authentication options become impractical, and in enterprise scenarios, managing and auditing these identities becomes burdensome.

Today, we are pleased to announce a Google Cloud Platform IAM authentication backend for Vault. This allows a developer to use an existing IAM identity to authenticate to Vault. Using a service account, you can sign a JWT to show it came from a particular account, and use that to authenticate to Vault. Learn more in the documentation.


The following example in Go shows how a user can authenticate with Vault using this backend. This example assumes the GCP authentication backend has already been mounted at auth/gcp on the Vault server and configured.
package main

import (
    "encoding/json"
    "fmt"
    "io/ioutil"
    "log"
    "os"
    "time"

    vaultapi "github.com/hashicorp/vault/api"
    "golang.org/x/oauth2"
    "golang.org/x/oauth2/google"
    "google.golang.org/api/iam/v1"
)

func main() {
    // Start [PARAMS]
    project := "project-123456"
    serviceAccount := "myserviceaccount@project-123456.iam.gserviceaccount.com"
    credsPath := "path/to/creds.json"

    os.Setenv("VAULT_ADDR", "https://vault.mycompany.com")
    defer os.Setenv("VAULT_ADDR", "")
    // End [PARAMS]

    // Start [GCP IAM Setup]
    // Build an authenticated IAM API client from the service account's
    // JSON credentials file.
    jsonBytes, err := ioutil.ReadFile(credsPath)
    if err != nil {
        log.Fatal(err)
    }
    config, err := google.JWTConfigFromJSON(jsonBytes, iam.CloudPlatformScope)
    if err != nil {
        log.Fatal(err)
    }

    httpClient := config.Client(oauth2.NoContext)
    iamClient, err := iam.New(httpClient)
    if err != nil {
        log.Fatal(err)
    }
    // End [GCP IAM Setup]

    // 1. Generate a signed JWT using IAM.
    resourceName := fmt.Sprintf("projects/%s/serviceAccounts/%s", project, serviceAccount)
    jwtPayload := map[string]interface{}{
        "aud": "auth/gcp/login",
        "sub": serviceAccount,
        "exp": time.Now().Add(time.Minute * 10).Unix(),
    }

    payloadBytes, err := json.Marshal(jwtPayload)
    if err != nil {
        log.Fatal(err)
    }
    signJwtReq := &iam.SignJwtRequest{
        Payload: string(payloadBytes),
    }

    resp, err := iamClient.Projects.ServiceAccounts.SignJwt(resourceName, signJwtReq).Do()
    if err != nil {
        log.Fatal(err)
    }

    // 2. Send the signed JWT in a login request to Vault.
    vaultClient, err := vaultapi.NewClient(vaultapi.DefaultConfig())
    if err != nil {
        log.Fatal(err)
    }

    vaultResp, err := vaultClient.Logical().Write(
        "auth/gcp/login",
        map[string]interface{}{
            "role": "test",
            "jwt":  resp.SignedJwt,
        })
    if err != nil {
        log.Fatal(err)
    }

    // 3. Use the auth token from the response for subsequent requests.
    log.Printf("Access token: %s", vaultResp.Auth.ClientToken)
    vaultClient.SetToken(vaultResp.Auth.ClientToken)
    // ...
}

Vault is just one way of managing secrets in development. For further reading on choosing a solution that’s right for you, see Google Cloud Platform’s documentation on Secret Management.

By Emily Ye, Software Engineer

Making Great Mobile Games with Firebase

Tuesday, August 15, 2017

So much goes into building and maintaining a mobile game. Let’s say you want to ship it with a level builder for sharing content with other players and, looking forward, you want to roll out new content and unlockables linked with player behavior. Of course, you also need players to be able to easily sign into your soon-to-be hit game.

With a DIY approach, you’d be faced with having to build user management, data storage, server-side logic, and more. That would take a lot of your time and, more importantly, divert critical resources away from what you really want to do: build that amazing new mobile game!

Our Firebase SDKs for Unity and C++ provide you with the tools you need to add these features and more to your game with ease. Plus, to help you better understand how Firebase can help you build your next chart-topper, we’ve built a sample game in Unity and open sourced it: MechaHamster. Check it out on Google Play or download the project from GitHub to see how easy it is to integrate Firebase into your game.
Before you dive into the code for MechaHamster, here’s a rundown of the Firebase products that can help your game be successful.

Analytics

One of the best tools you have to maintain a high-performing game is your analytics. With Google Analytics for Firebase, you can see where your players might be struggling and make adjustments as needed. Analytics also integrates with AdWords and other major ad networks to maximize your campaign performance. If you monetize your game using AdMob, you can link your two accounts and see the lifetime value (LTV) of your players, from in-game purchases and AdMob, right from your Analytics console. And with StreamView, you can see how players are interacting with your game in real time.

Test Lab for Android - Game Loop Test

Before releasing updates to your game, you’ll want to make sure it works correctly. However, manual testing can be time-consuming when faced with a large variety of target devices. To help solve this, we recently launched Firebase Test Lab for Android Game Loop Test at Google I/O. If you add a demo mode to your game, Test Lab will automatically verify your game is working on a wide range of devices. You can read more in our deep dive blog post here.

Authentication

Another thing you’ll want to be sure to take care of before launch is easy sign-in, so your users can start playing as quickly as possible. Firebase Authentication can help by handling all sign-in and authentication, from simple email and password logins to support for common identity providers like Google, Facebook, Twitter, and GitHub. Just announced recently at I/O, Firebase also now supports phone number authentication. And Firebase Authentication shares state across devices, so your users can pick up where they left off, no matter what platforms they’re using.

Remote Config

As more players start using your game, you may realize that there are a few spots that are frustrating for your audience. You may even see churn rates start to rise, so you decide that you need to push some adjustments. With Firebase Remote Config, you can change values in the console and push them out to players. Are some players having trouble navigating levels? You can adjust the difficulty and update remotely. Remote Config can even benefit your development cycle; team members can tweak and test parameters without having to make new builds.

Realtime Database

Now that you have a robust player community, you’re probably starting to see a bunch of great player-built levels. With Firebase Realtime Database, you can store player data and sync it in real time, meaning that the level builder you’ve built can store and share data easily with other players. You don’t need your own server, and it’s optimized for offline use. Plus, Realtime Database integrates with Firebase Auth for secure access to user-specific data.

Cloud Messaging & Dynamic Links

A few months go by and your game is thriving, with high engagement and an active community. You’re ready to release your next wave of new content, but how can you efficiently get the word out to your users? Firebase Cloud Messaging lets you target messages to player segments, without any coding required. And Firebase Dynamic Links allow your users to share this new content — or an invitation to your game — with other players. Dynamic Links survive the app install process, so a new player can install your app and then dive right into the piece of content that was shared with him or her.

At Firebase, our mission is to help mobile developers build better apps and grow successful businesses. When it comes to games, that means taking care of the boring stuff, so you can focus on what matters — making a great game. Our mobile SDKs for C++ and Unity are available now at firebase.google.com/games.

By Darin Hilton, Art Director

Professors from Around the World Get Their Students into HFOSS

Friday, July 21, 2017

Over the last four years, instructors from around the world have gathered for the Professors’ Open Source Software Experience (POSSE) workshop to integrate open source concepts into their curricula. At each event, professors make more progress toward providing students with hands-on experience via contributions to humanitarian free and open source software (HFOSS).

This year Google was proud to not only host a workshop at our San Francisco office in April, but also to collaborate with the organizers to bring a POSSE workshop to Europe for the first time.
POSSE workshop leaders, from left to right: Clif Kussmaul (Muhlenberg College), Lori Postner (Nassau Community College), Stoney Jackson (Western New England University), Heidi Ellis (Western New England University), Greg Hislop (Drexel University), and Darci Burdge (Nassau Community College).
The workshop in Italy was led by Dr. Gregory Hislop from Drexel University and Drs. Heidi Ellis and Stoney Jackson from Western New England University, and brought together 20 instructors from Germany, Hungary, India, Italy, Macedonia, Qatar, Spain, Swaziland, the United Kingdom, and the United States. This was the most geographically diverse workshop to date!
Group photos in San Francisco, USA on April 22, 2017 (left) and Bologna, Italy on July 1, 2017 (right).
What’s next for POSSE? University instructors from institutions in the US can apply now to participate in the next workshop, November 16-18 in Raleigh, NC and join their peers in the community of instructors weaving HFOSS into their curriculum.

By Helen Hu, Google Open Source

Facets: An Open Source Visualization Tool for Machine Learning Training Data

Monday, July 17, 2017

Cross-posted on the Google Research Blog

Getting the best results out of a machine learning (ML) model requires that you truly understand your data. However, ML datasets can contain hundreds of millions of data points, each consisting of hundreds (or even thousands) of features, making it nearly impossible to understand an entire dataset in an intuitive fashion. Visualization can help unlock nuances and insights in large datasets. A picture may be worth a thousand words, but an interactive visualization can be worth even more.

Working with the PAIR initiative, we’ve released Facets, an open source visualization tool to aid in understanding and analyzing ML datasets. Facets consists of two visualizations that allow users to see a holistic picture of their data at different granularities. Get a sense of the shape of each feature of the data using Facets Overview, or explore a set of individual observations using Facets Dive. These visualizations allow you to debug your data, which, in machine learning, is as important as debugging your model. They can easily be used inside of Jupyter notebooks or embedded into webpages. In addition to the open source code, we've also created a Facets demo website. This website allows anyone to visualize their own datasets directly in the browser without any software installation or setup, and the data never leaves your computer.
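
As a taste of the notebook workflow, here is a minimal sketch of embedding Facets Overview in a Jupyter notebook. It assumes the facets_overview Python code from the Facets repository is importable and that the facets-jupyter.html component has been installed as an nbextension; the CSV paths are placeholders for the UCI Census train and test splits.

import base64

import pandas as pd
from IPython.core.display import HTML, display
from facets_overview.generic_feature_statistics_generator import \
    GenericFeatureStatisticsGenerator

# Load the two datasets to compare (placeholder paths).
train_df = pd.read_csv("adult.data.csv")
test_df = pd.read_csv("adult.test.csv")

# Compute the feature statistics proto for both datasets so their
# distributions can be compared side by side.
proto = GenericFeatureStatisticsGenerator().ProtoFromDataFrames([
    {"name": "train", "table": train_df},
    {"name": "test", "table": test_df},
])
protostr = base64.b64encode(proto.SerializeToString()).decode("utf-8")

# Hand the serialized stats to the facets-overview web component.
HTML_TEMPLATE = """
<link rel="import" href="/nbextensions/facets-dist/facets-jupyter.html">
<facets-overview id="overview"></facets-overview>
<script>
  document.querySelector("#overview").protoInput = "{protostr}";
</script>"""
display(HTML(HTML_TEMPLATE.format(protostr=protostr)))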

Facets Overview

Facets Overview automatically gives users a quick understanding of the distribution of values across the features of their datasets. Multiple datasets, such as a training set and a test set, can be compared on the same visualization. Common data issues that can hamper machine learning are pushed to the forefront, such as: unexpected feature values, features with high percentages of missing values, features with unbalanced distributions, and feature distribution skew between datasets.
Facets Overview visualization of the six numeric features of the UCI Census datasets[1]. The features are sorted by non-uniformity, with the feature with the most non-uniform distribution at the top. Numbers in red indicate possible trouble spots, in this case numeric features with a high percentage of values set to 0. The histograms at right allow you to compare the distributions between the training data (blue) and test data (orange).

Facets Overview visualization showing two of the nine categorical features of the UCI Census datasets[1]. The features are sorted by distribution distance, with the feature with the biggest skew between the training (blue) and test (orange) datasets at the top. Notice in the “Target” feature that the label values differ between the training and test datasets, due to a trailing period in the test set (“<=50K” vs “<=50K.”). This can be seen in the chart for the feature and also in the entries in the “top” column of the table. This label mismatch would cause a model trained and tested on this data to not be evaluated correctly.

Facets Dive

Facets Dive provides an easy-to-customize, intuitive interface for exploring the relationship between the data points across the different features of a dataset. With Facets Dive, you control the position, color and visual representation of each data point based on its feature values. If the data points have images associated with them, the images can be used as the visual representations.
Facets Dive visualization showing all 16,281 data points in the UCI Census test dataset[1]. The animation shows a user coloring the data points by one feature (“Relationship”), faceting in one dimension by a continuous feature (“Age”) and then faceting in another dimension by a discrete feature (“Marital Status”).
Facets Dive visualization of a large number of face drawings from the “Quick, Draw!” Dataset, showing the relationship between the number of strokes and points in the drawings and the ability for the “Quick, Draw!” classifier to correctly categorize them as faces.
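
A similarly hedged sketch for Facets Dive, under the same assumptions as the Overview example above; position, color, and faceting are then chosen interactively in the rendered component.

import pandas as pd
from IPython.core.display import HTML, display

# Records to explore (placeholder path to the UCI Census test split).
df = pd.read_csv("adult.test.csv")

# Hand the records to the facets-dive web component as JSON.
HTML_TEMPLATE = """
<link rel="import" href="/nbextensions/facets-dist/facets-jupyter.html">
<facets-dive id="dive" height="600"></facets-dive>
<script>
  document.querySelector("#dive").data = {jsonstr};
</script>"""
display(HTML(HTML_TEMPLATE.format(jsonstr=df.to_json(orient="records"))))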

Fun Fact: In large datasets, such as the CIFAR-10 dataset[2], a small human labelling error can easily go unnoticed. We inspected the CIFAR-10 dataset with Dive and were able to catch a frog-cat – an image of a frog that had been incorrectly labelled as a cat!
Exploration of the CIFAR-10 dataset using Facets Dive. Here we facet the ground truth labels by row and the predicted labels by column, producing a confusion matrix view that lets us drill into particular kinds of misclassifications. In this particular case, the ML model incorrectly labels some small percentage of true cats as frogs. The interesting thing we find by putting the real images in the confusion matrix is that one of these "true cats" that the model predicted to be a frog is, on visual inspection, actually a frog. With Facets Dive, we can determine that this wasn't a true misclassification by the model, but rather incorrectly labeled data in the dataset.
Can you spot the frog-cat?
We’ve gotten great value out of Facets inside of Google and are excited to share the visualizations with the world. We hope they can help you discover new and interesting things about your data that lead you to create more powerful and accurate machine learning models. And since they are open source, you can customize the visualizations for your specific needs or contribute to the project to help us all better understand our data. If you have feedback about your experience with Facets, please let us know what you think.

By James Wexler, Senior Software Engineer, Google Big Picture Team

Acknowledgments

This work is a collaboration between Mahima Pushkarna, James Wexler and Jimbo Wilson, with input from the entire Big Picture team. We would also like to thank Justine Tunney for providing us with the build tooling.

References

[1] Lichman, M. (2013). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml/datasets/Census+Income]. Irvine, CA: University of California, School of Information and Computer Science

[2] Learning Multiple Layers of Features from Tiny Images, Alex Krizhevsky (2009).

After a "close call," a coding champion

Thursday, July 13, 2017

Cross-posted on The Keyword

Eighteen-year-old Cameroon resident Nji Collins had just put the finishing touches on his final submission for the Google Code-in competition when his entire town lost internet access. It stayed dark for two months.

“That was a really, really close call,” Nji, who prefers to be called Collins, tells the Keyword, adding that he traveled to a neighboring town every day to check his email and the status of the contest. “It was stressful.”

Google’s annual Code-in contest, an effort to introduce teenagers to the world of open source, invites high school students from around the world to compete. It’s part of our mission to encourage and inspire the next generation of computer scientists, and in turn, the contest allows these young people to play a role in building real technologies.

Over the course of the competition, participants complete open source coding and design “tasks” administered by an array of participating organizations like Wikimedia and OpenMRS. Tasks range from editing webpages to updating databases to making videos; one of Collins’ favorites, for example, was making the OpenMRS home page sensitive to keystrokes. This year, more than 1,300 entrants from 62 countries completed nearly 6,400 assignments.

While Google sponsors and runs the contest, the participating organizations, who work most closely with the students, choose the winners. Those who finish the most tasks are named finalists, and the organizations each select two winners from that group. Those winners are then flown to San Francisco, CA for an action-packed week involving talks at the Googleplex in Mountain View, office tours, Segway journeys through the city, and a sunset cruise on the SF Bay.
Collins with some of the other winners from Google Code-in 2016
“It’s really fun to watch these kids come together and thrive,” says Stephanie Taylor, Code-in’s program manager. “Bringing together students from, say, Thailand and Poland because they have something in common: a shared love of computer science. Lifelong friendships are formed on these trips.”

Indeed, many Code-in winners say the community is their main motivator for joining the competition. “The people are what brought me here and keep me here,” says Sushain Cherivirala, a Carnegie Mellon computer science major and former Code-in winner who now serves as a program mentor. Mentors work with Code-in participants throughout the course of the competition to help them complete tasks and interface with the participating organizations.
Google Code-in winners on the Google campus
Code-in also acts as an accessible introduction to computer science and the open source world. Mira Yang, a 17-year-old from New Jersey, learned how to code for the first time this year. She says she never would have even considered studying computer science before she dabbled in a few Code-in tasks. Now, she plans to major in it.
Google Code-in winners Nji Collins and Mira Yang

“Code-in changed my view on computer science,” she says. “I was able to learn that I can do this. There’s definitely a stigma for girls in CS. But I found out that people will support you, and there’s a huge network out there.”

That network extended to Cameroon, where Collins’ patience and persistence paid off as he waited out his town’s internet blackout. One afternoon, while checking his email a few towns away, he discovered he’d been named a Code-in winner. He had been a finalist the year prior, when he was the only student from his school to compete. This year, he’d convinced a handful of classmates to join in.

“It wasn’t fun doing it alone; I like competition,” says Collins, who learned how to code by doing his older sister’s computer science homework assignments alongside her. “It pushes me to work harder.”

Learn more about the annual Code-in competition.

By Carly Schwartz, Editor-in-Chief, Google Internal News

Supercharge your Computer Vision models with the TensorFlow Object Detection API

Thursday, June 15, 2017

Crossposted on the Google Research Blog

At Google, we develop flexible state-of-the-art machine learning (ML) systems for computer vision that not only can be used to improve our products and services, but also spur progress in the research community. Creating accurate ML models capable of localizing and identifying multiple objects in a single image remains a core challenge in the field, and we invest a significant amount of time training and experimenting with these systems.
Detected objects in a sample image (from the COCO dataset) made by one of our models.
Image credit: Michael Miley, original image
Last October, our in-house object detection system achieved new state-of-the-art results and placed first in the COCO detection challenge. Since then, this system has generated results for a number of research publications [1-7] and has been put to work in Google products such as NestCam, the similar items and style ideas feature in Image Search, and street number and name detection in Street View.

Today we are happy to make this system available to the broader research community via the TensorFlow Object Detection API. This codebase is an open source framework built on top of TensorFlow that makes it easy to construct, train and deploy object detection models. Our goal in designing this system was to support state-of-the-art models while allowing for rapid exploration and research. Our first release contains a selection of detection models.
The SSD models that use MobileNet are lightweight, so that they can be comfortably run in real time on mobile devices. Our winning COCO submission in 2016 used an ensemble of Faster RCNN models, which are more computationally intensive but significantly more accurate. For more details on the performance of these models, see our CVPR 2017 paper.

Are you ready to get started?
We’ve certainly found this code to be useful for our computer vision needs, and we hope that you will as well. Contributions to the codebase are welcome, and please stay tuned for our own further updates to the framework. To get started, download the code here and try detecting objects in some of your own images using the Jupyter notebook, or train your own pet detector on Cloud ML Engine!
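
For a flavor of what out-of-the-box inference looks like, here is a hedged sketch in the TF 1.x style used at the time. The frozen graph path is a placeholder, and the repository's Jupyter notebook remains the authoritative walkthrough.

import numpy as np
import tensorflow as tf
from PIL import Image

# Path to a frozen graph exported by the API's export tooling (placeholder).
PATH_TO_FROZEN_GRAPH = "ssd_mobilenet_v1_coco/frozen_inference_graph.pb"

# Load the frozen detection graph.
detection_graph = tf.Graph()
with detection_graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, "rb") as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

# Run one image through the detector; the input is a batched uint8 tensor.
image = np.array(Image.open("image.jpg"))
with tf.Session(graph=detection_graph) as sess:
    boxes, scores, classes, num = sess.run(
        ["detection_boxes:0", "detection_scores:0",
         "detection_classes:0", "num_detections:0"],
        feed_dict={"image_tensor:0": image[np.newaxis, ...]})

# Boxes are normalized [ymin, xmin, ymax, xmax]; scores sort descending.
print("Top detection score:", scores[0, 0])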

By Jonathan Huang, Research Scientist and Vivek Rathod, Software Engineer

Acknowledgements
The release of the TensorFlow Object Detection API and the pre-trained model zoo has been the result of widespread collaboration among Google researchers, with feedback and testing from product groups. In particular we want to highlight the contributions of the following individuals:

Core Contributors: Derek Chow, Chen Sun, Menglong Zhu, Matthew Tang, Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna, Yang Song, Sergio Guadarrama, Jasper Uijlings, Viacheslav Kovalevskyi, Kevin Murphy

Also special thanks to: Andrew Howard, Rahul Sukthankar, Vittorio Ferrari, Tom Duerig, Chuck Rosenberg, Hartwig Adam, Jing Jing Long, Victor Gomes, George Papandreou, Tyler Zhu

References
  1. Speed/accuracy trade-offs for modern convolutional object detectors, Huang et al., CVPR 2017 (paper describing this framework)
  2. Towards Accurate Multi-person Pose Estimation in the Wild, Papandreou et al., CVPR 2017
  3. YouTube-BoundingBoxes: A Large High-Precision Human-Annotated Data Set for Object Detection in Video, Real et al., CVPR 2017 (see also our blog post)
  4. Beyond Skip Connections: Top-Down Modulation for Object Detection, Shrivastava et al., arXiv preprint arXiv:1612.06851, 2016
  5. Spatially Adaptive Computation Time for Residual Networks, Figurnov et al., CVPR 2017
  6. AVA: A Video Dataset of Spatio-temporally Localized Atomic Visual Actions, Gu et al., arXiv preprint arXiv:1705.08421, 2017
  7. MobileNets: Efficient convolutional neural networks for mobile vision applications, Howard et al., arXiv preprint arXiv:1704.04861, 2017

MobileNets: Open Source Models for Efficient On-Device Vision

Wednesday, June 14, 2017

Crossposted on the Google Research Blog

Deep learning has fueled tremendous progress in the field of computer vision in recent years, with neural networks repeatedly pushing the frontier of visual recognition technology. While many of those technologies such as object, landmark, logo and text recognition are provided for internet-connected devices through the Cloud Vision API, we believe that the ever-increasing computational power of mobile devices can enable the delivery of these technologies into the hands of our users, anytime, anywhere, regardless of internet connection. However, visual recognition for on-device and embedded applications poses many challenges: models must run quickly with high accuracy in a resource-constrained environment, making use of limited computation, power and space.

Today we are pleased to announce the release of MobileNets, a family of mobile-first computer vision models for TensorFlow, designed to effectively maximize accuracy while being mindful of the restricted resources for an on-device or embedded application. MobileNets are small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases. They can be built upon for classification, detection, embeddings and segmentation similar to how other popular large scale models, such as Inception, are used.
Example use cases include detection, fine-grain classification, attributes and geo-localization.
This release contains the model definition for MobileNets in TensorFlow using TF-Slim, as well as 16 pre-trained ImageNet classification checkpoints for use in mobile projects of all sizes. The models can be run efficiently on mobile devices with TensorFlow Mobile.
Model Checkpoint        Million MACs  Million Parameters  Top-1 Accuracy  Top-5 Accuracy
MobileNet_v1_1.0_224    569           4.24                70.7            89.5
MobileNet_v1_1.0_192    418           4.24                69.3            88.9
MobileNet_v1_1.0_160    291           4.24                67.2            87.5
MobileNet_v1_1.0_128    186           4.24                64.1            85.3
MobileNet_v1_0.75_224   317           2.59                68.4            88.2
MobileNet_v1_0.75_192   233           2.59                67.4            87.3
MobileNet_v1_0.75_160   162           2.59                65.2            86.1
MobileNet_v1_0.75_128   104           2.59                61.8            83.6
MobileNet_v1_0.50_224   150           1.34                64.0            85.4
MobileNet_v1_0.50_192   110           1.34                62.1            84.0
MobileNet_v1_0.50_160   77            1.34                59.9            82.5
MobileNet_v1_0.50_128   49            1.34                56.2            79.6
MobileNet_v1_0.25_224   41            0.47                50.6            75.0
MobileNet_v1_0.25_192   34            0.47                49.0            73.6
MobileNet_v1_0.25_160   21            0.47                46.0            70.7
MobileNet_v1_0.25_128   14            0.47                41.3            66.2
Choose the right MobileNet model to fit your latency and size budget. The size of the network in memory and on disk is proportional to the number of parameters. The latency and power usage of the network scale with the number of Multiply-Accumulates (MACs), which counts the fused multiplication-and-addition operations. Top-1 and Top-5 accuracies are measured on the ILSVRC dataset.
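
As a rough sketch of how these checkpoints are typically consumed, the example below builds a MobileNet classifier with TF-Slim. It assumes the nets.mobilenet_v1 module from the tensorflow/models repository (research/slim) is on the Python path; argument and endpoint names may differ between versions.

import tensorflow as tf
from nets import mobilenet_v1  # from tensorflow/models, research/slim

slim = tf.contrib.slim

# Batched input images, already resized and scaled to [-1, 1].
images = tf.placeholder(tf.float32, [None, 224, 224, 3])

# Build the width-1.0, 224x224 variant; smaller depth_multiplier values
# (0.75, 0.50, 0.25) select the thinner models from the table above.
with slim.arg_scope(mobilenet_v1.mobilenet_v1_arg_scope()):
    logits, end_points = mobilenet_v1.mobilenet_v1(
        images, num_classes=1001, depth_multiplier=1.0, is_training=False)

# Softmax over the 1001 ImageNet classes (background + 1000 labels).
predictions = end_points["Predictions"]
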
We are excited to share MobileNets with the open source community. Information for getting started can be found at the TensorFlow-Slim Image Classification Library. To learn how to run models on-device please go to TensorFlow Mobile. You can read more about the technical details of MobileNets in our paper, MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications.

By Andrew G. Howard, Senior Software Engineer and Menglong Zhu, Software Engineer

Acknowledgements
MobileNets were made possible with the hard work of many engineers and researchers throughout Google. Specifically we would like to thank:

Core Contributors: Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam

Special thanks to: Benoit Jacob, Skirmantas Kligys, George Papandreou, Liang-Chieh Chen, Derek Chow, Sergio Guadarrama, Jonathan Huang, Andre Hentz, Pete Warden

Google Summer of Code 2017 statistics part 2

Tuesday, June 6, 2017

Now that Google Summer of Code (GSoC) 2017 is underway, with students in their first full week of the coding period, we wanted to bring you some more statistics on the 2017 program. Lots and lots of numbers follow:

Organizations

Students are working with 201 organizations (the most we’ve ever had!) of which 39 are participating in GSoC for the first time.

Student Registrations

20,651 students from 144 countries registered for the program, which is an 8.8% increase over the previous high for the program.

Project Proposals

4,764 students from 108 countries submitted a total of 7,089 project proposals.

Gender breakdown

11.4% of accepted students are women. We are always interested in making our programs and open source more inclusive. Please contact us if you know of organizations we should work with to spread the word about GSoC to underrepresented groups.

Universities

The 1,318 students accepted into the GSoC 2017 program hailed from 575 universities, of which 142 have students participating for the first time in GSoC.

Top 10 schools by students accepted for GSoC 2017 

University Name                                                   Country     Accepted Students
International Institute of Information Technology, Hyderabad     India       39
Birla Institute of Technology and Science, Pilani (BITS Pilani)  India       37
Indian Institute of Technology, Kharagpur                        India       31
University of Moratuwa                                           Sri Lanka   24
Delhi Technological University                                   India       23
Birla Institute of Technology and Science Pilani, Goa Campus     India       18
Indian Institute of Technology, Roorkee                          India       18
Indian Institute of Technology, Bombay                           India       15
LNM Institute of Information Technology                          India       15
TU Munich/Technische Universität München                         Germany     14

Another post with stats on our GSoC mentors will be coming soon!

By Stephanie Taylor, Google Open Source

Google Summer of Code 2017 statistics part 1

Thursday, May 25, 2017

Since 2005, Google Summer of Code (GSoC) has been bringing new developers into the open source community every year. GSoC 2017 is the largest to date: 1,318 students from 72 countries were accepted into the program and are working with a record 201 open source organizations this summer.

Students are currently participating in the Community Bonding phase of the program, where they become familiar with the open source communities they will be working with. They also spend time learning the codebase and the community’s best practices so they can start their 12-week coding projects on May 30th.

Each year we like to share program statistics as we see GSoC continue to expand all over the world. This year three students are the first ever accepted into GSoC from their home countries: Qatar, Tajikistan and Zimbabwe. A complete list of accepted students and their countries is below:

Country Students Country Students Country Students
Argentina 3 Ghana 1 Qatar 1
Armenia 1 Greece 29 Romania 11
Australia 6 Hungary 6 Russian Federation 54
Austria 13 India 569 Saudi Arabia 1
Bangladesh 2 Indonesia 2 Serbia 3
Belarus 3 Ireland 5 Singapore 10
Belgium 6 Israel 2 Slovak Republic 6
Bosnia and Herzegovina 1 Italy 23 Slovenia 2
Brazil 21 Jamaica 1 South Africa 2
Bulgaria 4 Japan 13 South Korea 8
Cameroon 8 Kazakhstan 1 Spain 19
Canada 27 Kenya 1 Sri Lanka 54
China 49 Latvia 1 Sweden 8
Colombia 1 Lithuania 2 Switzerland 5
Costa Rica 1 Macedonia 1 Taiwan 1
Croatia 1 Mexico 1 Tajikistan 1
Czech Republic 6 Moldova 1 Turkey 11
Denmark 2 Netherlands 14 Ukraine 12
Ecuador 2 New Zealand 1 United Arab Emirates 1
Egypt 10 Nigeria 1 United Kingdom 16
Estonia 1 Pakistan 8 United States 126
Finland 4 Peru 1 Uruguay 1
France 20 Poland 19 Vietnam 4
Germany 55 Portugal 10 Zimbabwe 1

In our next GSoC statistics post we will delve deeper into the schools, gender breakdown, mentors and registration numbers for the 2017 program.

By Stephanie Taylor, Google Open Source

Open sourcing the Firebase SDKs

Wednesday, May 17, 2017

Today, at Google I/O 2017, we are pleased to announce that we are taking our first steps towards open sourcing our client libraries. By making our SDKs open, we’re aiming to show our commitment to greater transparency and to building a stronger developer community. To help further that goal, we’ll be using GitHub as a core part of our own toolchain to enable all of you to contribute as well. As you find issues in our code, from inconsistent style to bugs, you can file issues through the standard GitHub issue tracker. You can also find our project in the Google Open Source directory. We’re really looking forward to your pull requests!

What’s open?

We’re starting by open sourcing several products in our iOS, JavaScript, Java, Node.js and Python SDKs. We'll be looking at open sourcing our Android SDK as well. The SDKs are being licensed under Apache 2.0, the same flexible license as existing Firebase open source projects like FirebaseUI.

Let's take a look at each repo:

Firebase iOS SDK 4.0

https://github.com/firebase/firebase-ios-sdk

With the launch of the Firebase iOS 4.0 SDKs we have made several improvements to the developer experience, such as more idiomatic API names for our Swift users. By open sourcing our iOS SDKs we hope to provide an additional avenue for you to give us feedback on such features. For this first release we are open sourcing our Realtime Database, Auth, Cloud Storage and Cloud Messaging (FCM) SDKs, but going forward we intend to release more.

Because we aren't yet able to open source some of the Firebase components, the full product build process isn't available. While you can use this repo to build a FirebaseDev pod, our libraries distributed through CocoaPods will continue to be static frameworks for the time being. We are continually looking for ways to improve the experience for developers, however you integrate.

Our GitHub README provides more details on how you build, test and contribute to our iOS SDKs.

Firebase JavaScript SDK 4.0

https://github.com/firebase/firebase-js-sdk

We are excited to announce that we are open sourcing our Realtime Database, Cloud Storage and Cloud Messaging (FCM) SDKs for JavaScript. We’ll have a couple of improvements hot on the heels of this initial release, including open sourcing Firebase Authentication. We are also in the process of releasing the source maps for our components, which we expect will really improve the debuggability of your app.

Our GitHub repo includes instructions on how you can build, test and contribute.

Firebase Admin SDKs

Node.js: https://github.com/firebase/firebase-admin-node
Java: https://github.com/firebase/firebase-admin-java
Python: https://github.com/firebase/firebase-admin-python

We are happy to announce that all three of our Admin SDKs for accessing Firebase on privileged environments are now fully open source, including our recently-launched Python SDK. While we continue to explore supporting more languages, we encourage you to use our source as inspiration to enable Firebase for your environment (and if you do, we'd love to hear about it!).
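
For a quick taste of the Python Admin SDK, here is a minimal sketch of initializing an app and minting a custom auth token; the service account path and uid are placeholders.

import firebase_admin
from firebase_admin import auth, credentials

# Initialize the SDK with service account credentials (placeholder path).
cred = credentials.Certificate("path/to/serviceAccountKey.json")
firebase_admin.initialize_app(cred)

# Mint a custom token for a user; a client SDK can exchange it for a
# Firebase ID token when signing in.
custom_token = auth.create_custom_token("some-uid")
print(custom_token)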

We're really excited to see what you do with the updated SDKs. As always, reach out to us with feedback or questions in the Firebase-Talk Google Group, on Stack Overflow, via the Firebase Support team, and now on GitHub for SDK issues and pull requests! And to read about the other improvements to Firebase that launched at Google I/O, head over to the Firebase blog.

By Salman Qadri, Firebase Product Manager

Open Source at Google I/O 2017

Tuesday, May 16, 2017

One of the best parts of Google I/O every year is the chance to meet with the developers and community organizers from all over the world. It's a unique opportunity to have candid one-on-one conversations about the products and technologies we all love.

This year, I/O features a Community Lounge for attendees to relax, hang out, and play with neat experiments and games. It also features several mini-meetups during which you can chat with Googlers on a variety of topics.

Chris DiBona and Will Norris from the Google Open Source Programs Office will be around Thursday and Friday to talk about anything and everything open source, including our student outreach programs and the new Google Open Source website. If you're at Google I/O this year, make sure to drop by and say hello. Find dates, times, and other details in the Community Lounge schedule.

By Josh Simmons, Google Open Source

OSS-Fuzz: Five months later, and rewarding projects

Monday, May 8, 2017

Five months ago, we announced OSS-Fuzz, Google’s effort to help make open source software more secure and stable. Since then, our robot army has been working hard at fuzzing, processing 10 trillion test inputs a day. Thanks to the efforts of the open source community who have integrated a total of 47 projects, we’ve found over 1,000 bugs (264 of which are potential security vulnerabilities).

Breakdown of the types of bugs we're finding.

Notable results

OSS-Fuzz has found numerous security vulnerabilities in several critical open source projects: 10 in FreeType2, 17 in FFmpeg, 33 in LibreOffice, 8 in SQLite 3, 10 in GnuTLS, 25 in PCRE2, 9 in gRPC, and 7 in Wireshark, among others. We’ve also had at least one bug collision with another independent security researcher (CVE-2017-2801). (Some of the bugs are still view-restricted, so links may show smaller numbers.)

Once a project is integrated into OSS-Fuzz, the continuous and automated nature of OSS-Fuzz means that we often catch these issues just hours after the regression is introduced into the upstream repository, before any users are affected.

Fuzzing not only finds memory safety bugs, it can also find correctness or logic bugs. One example is a carry-propagating bug in OpenSSL (CVE-2017-3732).

Finally, OSS-Fuzz has reported over 300 timeout and out-of-memory failures (~75% of which got fixed). Not every project treats these as bugs, but fixing them enables OSS-Fuzz to find more interesting bugs.

Announcing rewards for open source projects

We believe that user and internet security as a whole can benefit greatly if more open source projects include fuzzing in their development process. To this end, we’d like to encourage more projects to participate and adopt the ideal integration guidelines that we’ve established.

Combined with fixing all the issues that are found, this is often a significant amount of work for developers who may be working on an open source project in their spare time. To support these projects, we are expanding our existing Patch Rewards program to include rewards for the integration of fuzz targets into OSS-Fuzz.

To qualify for these rewards, a project needs to have a large user base and/or be critical to global IT infrastructure. Eligible projects will receive $1,000 for initial integration, and up to $20,000 for ideal integration (the final amount is at our discretion). You have the option of donating these rewards to charity instead, and Google will double the amount.

To qualify for the ideal integration reward, projects must show that:
  • Fuzz targets are checked into their upstream repository and integrated in the build system with sanitizer support (up to $5,000).
  • Fuzz targets are efficient and provide good code coverage (>80%) (up to $5,000). 
  • Fuzz targets are part of the official upstream development and regression testing process, i.e. they are maintained, run against old known crashers and periodically updated corpora (up to $5,000).
  • The last $5,000 is a “l33t” bonus that we may reward at our discretion for projects that we feel have gone the extra mile or done something really awesome.
We’ve already started to contact the first round of projects that are eligible for the initial reward. If you are the maintainer or point of contact for one of these projects, you may also reach out to us in order to apply for our ideal integration rewards.

The future

We’d like to thank the existing contributors who integrated their projects and fixed countless bugs. We hope to see more projects integrated into OSS-Fuzz, and greater adoption of fuzzing as standard practice when developing software.

By Oliver Chang, Abhishek Arya (Security Engineers, Chrome Security), Kostya Serebryany (Software Engineer, Dynamic Tools), and Josh Armour (Security Program Manager)

Students, Start Your Engineerings!

Thursday, May 4, 2017


It’s that time again! Our 201 mentoring organizations have selected the 1,318 students they look forward to working with during the 13th Google Summer of Code (GSoC). Congratulations to our 2017 students, and a big thank you to everyone who applied!

The next step for participating students is the Community Bonding period which runs from May 4th through May 30th. During this time, students will get up to speed on the culture and toolset of their new community. They’ll also get acquainted with their mentor and learn more about the languages or tools they will need to complete their projects. Coding commences May 30th.

To the more than 4,200 students who were not chosen this year: don’t be discouraged! Many students apply at least once to GSoC before being accepted. You can improve your odds for next time by contributing directly to the open source project of your choice; organizations are always eager for new contributors! Look around GitHub and elsewhere on the internet for a project that interests you and get started.

Happy coding, everyone!

By Cat Allman, Google Open Source

Saddle up and meet us in Texas for OSCON 2017

Wednesday, April 26, 2017

The Google Open Source team is getting ready to hit the road and join the open source panoply that is Open Source Convention (OSCON). This year the event runs May 8-11 in Austin, Texas and is preceded on May 6-7 by the free-to-attend Community Leadership Summit (CLS).
Program chairs at OSCON 2016, left to right:
Kelsey Hightower, Scott Hanselman, Rachel Roumeliotis.
Photo used with permission from O'Reilly Media.

You’ll find our team and many other Googlers throughout the week on the program schedule and in the expo hall at booth #401. We’ve got a full rundown of our schedule below, but you can swing by the expo hall anytime to discuss Google Cloud Platform, our open source outreach programs, and the projects we’ve open sourced, including Kubernetes, TensorFlow, gRPC, and even our recently released open source documentation.

Of course, you’ll also find our very own Kelsey Hightower everywhere since he is serving as one of three OSCON program chairs for the second year in a row.

Are you a student, educator, project maintainer, community leader, past or present participant in Google Summer of Code or Google Code-in? Join us for lunch at the Google Summer of Code table in the conference lunch area on Wednesday afternoon. We’ll discuss our outreach programs which help open source communities grow while providing students with real world software development experience. We’ll be updating this blog post and tweeting with details closer to the date.

Without further ado, here’s our schedule of events:

Monday, May 8th (Tutorials)

Tuesday, May 9th (Tutorials)

Wednesday, May 10th (Sessions)
12:30pm Google Summer of Code and Google Code-in lunch

Thursday, May 11th (Sessions)

We look forward to seeing you deep in the heart of Texas at OSCON 2017!

By Josh Simmons, Google Open Source

Introducing tf-seq2seq: An Open Source Sequence-to-Sequence Framework in TensorFlow

Tuesday, April 11, 2017

Crossposted on the Google Research Blog

Last year, we announced Google Neural Machine Translation (GNMT), a sequence-to-sequence (“seq2seq”) model which is now used in Google Translate production systems. While GNMT achieved huge improvements in translation quality, its impact was limited by the fact that the framework for training these models was unavailable to external researchers.

Today, we are excited to introduce tf-seq2seq, an open source seq2seq framework in TensorFlow that makes it easy to experiment with seq2seq models and achieve state-of-the-art results. To that end, we made the tf-seq2seq codebase clean and modular, maintaining full test coverage and documenting all of its functionality.

Our framework supports various configurations of the standard seq2seq model, such as depth of the encoder/decoder, attention mechanism, RNN cell type, or beam size. This versatility allowed us to discover optimal hyperparameters and outperform other frameworks, as described in our paper, “Massive Exploration of Neural Machine Translation Architectures.”

A seq2seq model translating from Mandarin to English. At each time step, the encoder takes in one Chinese character and its own previous state (black arrow), and produces an output vector (blue arrow). The decoder then generates an English translation word-by-word, at each time step taking in the last word, the previous state, and a weighted combination of all the outputs of the encoder (aka attention [3], depicted in blue) and then producing the next English word. Please note that in our implementation we use wordpieces [4] to handle rare words.
In addition to machine translation, tf-seq2seq can also be applied to any other sequence-to-sequence task (i.e. learning to produce an output sequence given an input sequence), including machine summarization, image captioning, speech recognition, and conversational modeling. We carefully designed our framework to maintain this level of generality and provide tutorials, preprocessed data, and other utilities for machine translation.

We hope that you will use tf-seq2seq to accelerate (or kick off) your own deep learning research. We also welcome your contributions to our GitHub repository, where we have a variety of open issues that we would love to have your help with!

Acknowledgments:
We’d like to thank Eugene Brevdo, Melody Guan, Lukasz Kaiser, Quoc V. Le, Thang Luong, and Chris Olah for all their help. For a deeper dive into how seq2seq models work, please see the resources below.

References:
[1] Massive Exploration of Neural Machine Translation Architectures, Denny Britz, Anna Goldie, Minh-Thang Luong, Quoc Le
[2] Sequence to Sequence Learning with Neural Networks, Ilya Sutskever, Oriol Vinyals, Quoc V. Le. NIPS, 2014
[3] Neural Machine Translation by Jointly Learning to Align and Translate, Dzmitry Bahdanau, Kyunghyun Cho, Yoshua Bengio. ICLR, 2015
[4] Google’s Neural Machine Translation System: Bridging the Gap between Human and Machine Translation, Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, Jeffrey Dean. Technical Report, 2016
[5] Attention and Augmented Recurrent Neural Networks, Chris Olah, Shan Carter. Distill, 2016
[6] Neural Machine Translation and Sequence-to-sequence Models: A Tutorial, Graham Neubig
[7] Sequence-to-Sequence Models, TensorFlow.org

By Anna Goldie and Denny Britz, Research Software Engineer and Google Brain Resident, Google Brain Team

Join the first POSSE Workshop in Europe

Monday, April 10, 2017

We are excited to announce that the Professors’ Open Source Software Experience (POSSE) is expanding to Europe! POSSE is an event that brings together educators interested in providing students with experience in real-world projects through participation in humanitarian free and open source software (HFOSS) projects.

Over 100 faculty members have attended past workshops and there is a growing community of instructors teaching students through contributions to HFOSS. This three-stage faculty workshop will prepare you to support student participation in open source projects. During the workshop, you will:

  • Learn how to support student learning within real-world project environments
  • Motivate students and cultivate their appreciation of computing for social good
  • Collaborate with instructors who have similar interests and goals
  • Join a community of educators passionate about HFOSS

Workshop Format

Stage 1: Starts May 8, 2017 with online activities. Activities will take 2-3 hours per week and include interaction with workshop instructors and participants.
Stage 2: The face-to-face workshop will be held in Bologna, Italy, July 1-2, 2017 and is a pre-event for the ACM ITiCSE conference. Workshop participants include the workshop organizers, POSSE alumni, and members of the open source community.
Stage 3: Online activities and interactions in small groups immediately following the face-to-face workshop. Participants will have support while involving students in an HFOSS project in the classroom.

How to Apply

If you’re a full-time instructor at an academic institution outside of the United States, you can join the workshop being held in Bologna, Italy, July 1-2, 2017. Please complete and submit the application by May 1, 2017. Prior work with FOSS projects is not required. English is the official language of the workshop. The POSSE workshop committee will send an email notifying you of the status of your application by May 5, 2017.

Participant Support

The POSSE workshop in Europe is supported by Google. Attendees will be provided with funding for two nights lodging ($225 USD per night) and meals during the workshop. Travel costs will also be covered up to $450 USD. Participants are responsible for any charges above these limits. At this time, we can only support instructors at institutions of higher education outside of the U.S. For faculty at U.S. institutions, the next POSSE will be in fall 2017 on the east coast of the U.S.

We look forward to seeing you at the POSSE workshop in Italy!

By Helen Hu, Open Source Programs Office

Noto Serif CJK is here!

Thursday, April 6, 2017

Crossposted from the Google Developers Blog

Today, in collaboration with Adobe, we are responding to the call for Serif! We are pleased to announce Noto Serif CJK, the long-awaited companion to Noto Sans CJK released in 2014. Like Noto Sans CJK, Noto Serif CJK supports Simplified Chinese, Traditional Chinese, Japanese, and Korean, all in one font.

A serif-style CJK font goes by many names: Song (宋体) in Mainland China, Ming (明體) in Hong Kong, Macao and Taiwan, Minchō (明朝) in Japan, and Myeongjo (명조) or Batang (바탕) in Korea. The names and writing styles originated during the Song and Ming dynasties in China, when China's wood-block printing technique became popular. Characters were carved along the grain of the wood block. Horizontal strokes were easy to carve and vertical strokes were difficult; this resulted in thinner horizontal strokes and wider vertical ones. In addition, subtle triangular ornaments were added to the end of horizontal strokes to simulate Chinese Kai (楷体) calligraphy. This style continues today and has become a popular typeface style.

Serif fonts, which are considered more traditional with calligraphic aesthetics, are often used for long paragraphs of text such as body text of web pages or ebooks. Sans-serif fonts are often used for user interfaces of websites/apps and headings because of their simplicity and modern feeling.

Design of '永' ('eternity') in Noto Serif and Sans CJK. This ideograph is famous for having the most important elements of calligraphic strokes. It is often used to evaluate calligraphy or typeface design.

The Noto Serif CJK package offers the same features as Noto Sans CJK:

  • It has comprehensive character coverage for the four languages. This includes full coverage of CJK Ideographs with variation support for four regions, Kangxi radicals, Japanese Kana, Korean Hangul and other CJK symbols and letters in the Basic Multilingual Plane of Unicode. It also provides limited coverage of CJK Ideographs in Plane 2 of Unicode, as necessary to support standards from China and Japan.


  • Simplified Chinese: supports GB 18030 and China’s latest standard, the Table of General Chinese Characters (通用规范汉字表) published in 2013.
  • Traditional Chinese: supports Big5, and Traditional Chinese glyphs are compliant with the glyph standard of the Taiwan Ministry of Education (教育部國字標準字體).
  • Japanese: supports all of the kanji in JIS X 0208, JIS X 0213, and JIS X 0212, which includes all kanji in Adobe-Japan1-6.
  • Korean: the best font for typesetting classic Korean documents in Hangul and Hanja, such as the Hunminjeongeum manuscript, a UNESCO World Heritage item. Supports over 1.5 million archaic Hangul syllables and 11,172 modern syllables, as well as all CJK ideographs in KS X 1001 and KS X 1002.
Noto Serif CJK’s support of character and glyph set standards for the four languages
  • It respects the diversity of regional writing conventions for the same character. The example below shows the four glyphs of '述' (describe) in four languages, with subtle differences.
From left to right are glyphs of '述' in S. Chinese, T. Chinese, Japanese and Korean. This character means "describe".
  • It is offered in seven weights: ExtraLight, Light, Regular, Medium, SemiBold, Bold, and Black. Noto Serif CJK supports 43,027 encoded characters and includes 65,535 glyphs (the maximum number of glyphs that can be included in a single font). The seven weights, when put together, have almost a half-million glyphs. The weights are compatible with Google's Material Design standard fonts, Roboto, Noto Sans and Noto Serif (Latin-Greek-Cyrillic fonts in the Noto family).
Seven weights of Noto Serif CJK
  • It supports vertical text layout and is compliant with the Unicode vertical text layout standard. The shape, orientation, and position of particular characters (e.g., brackets and kana letters) change when the writing direction of the text is vertical.

The sheer size of this project also required regional expertise! Glyph design would not have been possible without the leading East Asian type foundries Changzhou SinoType Technology, Iwata Corporation, and Sandoll Communications.

Noto Serif CJK is open source under the SIL Open Font License, Version 1.1. We invite individual users to install and use these fonts in their favorite authoring apps, developers to bundle these fonts with your apps, and OEMs to embed them into their devices. The fonts are free for everyone to use!

Noto Serif CJK font download: https://www.google.com/get/noto
Noto Serif CJK on GitHub: https://github.com/googlei18n/noto-cjk
Adobe's landing page for this release: http://adobe.ly/SourceHanSerif
Source Han Serif on GitHub: https://github.com/adobe-fonts/source-han-serif/tree/release/

By Xiangye Xiao and Jungshik Shin, Internationalization Engineering team