Posts from December 2022

Season of Docs 2022 program results

Wednesday, December 14, 2022

Season of Docs is a Google program that provides support for open source projects to improve their documentation and gives professional technical writers an opportunity to gain experience in open source. We’re delighted to announce the 2022 program results!

From April 14 to November 14, 2022, selected open source organizations worked with their chosen technical writers to complete their documentation projects.

  • 30 open source organizations finished their projects
  • 93% of organizations had a positive experience
  • 90% of organizations felt their documentation project was successful

Take a look at the list of completed projects to see the wide range of subjects covered!

We’d also like to share that the 2021 case study report has been published on the website. The results are based on three post-program follow-up surveys sent to the organizations to determine whether their initial metrics had been met. A few highlights from the report include:

  • A diverse range of open source projects participated in the 2021 program, spanning programming languages, Python ecosystem projects, education, climate, machine learning, fintech, robotics, developer tools, and documentation tools.
  • Most projects focused on creating documentation to reduce maintainer burden by cutting down on issues and questions, and/or to increase participation by project users or contributors.
  • 18 projects reported they were still working with their technical writer (four technical writers are participating in a paid role).

Looking forward to Season of Docs 2023? Stay tuned and watch for posts on the Google Open Source blog and sign up for the announcements email list. For organizations and technical writers interested in applying for next year’s program, check out the guides, the FAQ, and the accepted project proposals from 2022 and previous seasons.

If you were excited about participating, please spread the word with social media posts. See the promotion and press page for images and other promotional materials you can include, and be sure to use the tag #SeasonOfDocs when promoting your project on social media. To reach the tech writing and open source communities, add #WriteTheDocs, #techcomm, #TechnicalWriting, and #OpenSource to your posts.

By Romina Vicente and Erin McKean – Google Open Source Programs Office

Google Announces OpenChain ISO/IEC 5230:2020 Conformant Program

Wednesday, December 7, 2022

Google is proud to be an OpenChain Governing Board member. As an early adopter of the first generation of OpenChain, we are announcing formal adoption of ISO/IEC 5230, the International Standard for open source license compliance.

The OpenChain Project maintains ISO/IEC 5230, the International Standard for open source compliance. This allows companies of all sizes and in all sectors to adopt the key requirements of a quality open source compliance program. This is an open standard and all parties are welcome to engage with the OpenChain community, to share their knowledge, and to contribute to the future of the standard.

Google has been at the forefront of open source development and the compliant use of open source from its earliest days. The Google Open Source Programs Office prides itself on bringing the best of open source to Google and the best of Google to open source. Responsible use of open source includes respecting developers through compliant use of their code. Google’s participation in the OpenChain project is an important part of supporting industry maturity and predictability in open source compliance.

Google previously announced its conformance with OpenChain 1.2 and collaborated with the OpenChain Project on the creation of both open source compliance program standards.

By Hilary Richardson and Sonal Bhoraniya – Google Open Source Programs Office Licensing & Compliance Advisors

GSoC 2022: It’s a wrap!

Tuesday, December 6, 2022

We just wrapped up the final projects for Google Summer of Code 2022 and want to share some highlights from this year’s program. We are pleased to announce that a total of 1,054 GSoC contributors successfully completed the 2022 cycle.

2022 saw some considerable changes to the Google Summer of Code program. Let’s start with some stats around those three major changes:

    • The standard 12-week project length was used by 71.2% of contributors, while 19.21% spent 13–18 weeks on their project and 9.33% took advantage of the 19–22 week project lengths. Feedback from mentors and contributors alike made it clear that the option for extended project lengths was a hit with participants.
    • GSoC 2022 allowed both medium-size (~175 hours) and large-size (~350 hours) projects. For 2022, 47% of the contributor projects were medium while 53% were large projects.
    • This year, for the first time, the program was open to participants beyond students: 10.4% of the accepted GSoC contributors were non-students.

In the final weeks of the program we asked contributors and mentors questions about their experiences with the program this year. Here are some of the key takeaways from the participants:

Favorite part of GSoC 2022

There were a few themes that rose to the top when contributors were asked what their favorite part of the program was:

  1. Getting involved in their organization’s open source community with folks from all around the world and their amazing mentors.
  2. Learning new skills (programming languages, new technologies) and learning more about open source communities.
  3. Contributing to a meaningful community and project.
  4. Learning from experienced and thoughtful developers (their mentors and their whole community).

Improved programming skills

96% of contributors think that GSoC helped their programming skills. The most common responses to how GSoC improved their skills were:

  • Improving the quality of their code through feedback from mentors, collaboration and learning more about the importance of code reviews.
  • Gaining confidence in their coding skills and knowledge about best practices. Learning how to write more efficient code and to meet the org’s coding standards.
  • Gaining the ability to read and understand real, complex codebases, and learning how to integrate their code with other developers’ code.

Most challenging parts of GSoC

The most common struggles included:

  • Managing their time effectively with many other commitments.
  • Initial days starting with the organization, understanding the codebase, and sometimes learning a new programming language along the way.
  • Communicating with mentors and community members in different time zones and collaborating remotely.

Additional fun stats from GSoC Contributors

  • 99% of GSoC contributors would recommend their GSoC mentors
  • 98% of GSoC contributors plan to continue working with their GSoC organization
  • 99% of GSoC contributors plan to continue working on open source
  • 35% of GSoC contributors said GSoC has already helped them get a job or internship
  • 84% of GSoC contributors said they would consider being a mentor
  • 95% of GSoC contributors said they would apply to GSoC again

We know that’s a lot of numbers to read through, but folks ask us for more information and feedback on GSoC each year, and we hope these details deliver. Every mentor and GSoC contributor took the time to fill out their evaluations and give us thoughtful written feedback on how the program affected them, and we wanted to highlight that effort here.

As we look forward to Google Summer of Code 2023, we want to thank all of our mentors, organization administrators, and contributors for a successful and smooth GSoC 2022. Thank you all for the time and energy you put in to make open source communities stronger and healthier.

Remember GSoC 2023 will be open for organization applications from January 23–February 7, 2023. We will announce the 2023 accepted GSoC organizations February 22 on the program site: g.co/gsoc. GSoC contributor applications will be open March 20–April 4, 2023.

By Stephanie Taylor, Program Manager – Google Open Source

Open sourcing the attention center model

Thursday, December 1, 2022

When you look at an image, which parts do you pay attention to first? Could a machine learn this? We provide a machine learning model that can be used to do just that. Why is it useful? The latest-generation image format (JPEG XL) supports serving the parts that you pay attention to first, which results in an improved user experience: images appear to load faster. And the model is not limited to encoding JPEG XL images; it can be used whenever we need to know where a human would look first.

An open source attention center model

What regions in an image will attract the majority of human visual attention first? We trained a model, called the attention center model, to predict such a region for a given image, and it is now open sourced. In addition to the model, we provide a script to use it in combination with the JPEG XL encoder: google/attention-center.

Some example predictions of our attention center model are shown in the following figure, where the green dot is the predicted attention center point for the image. Note that in the “two parrots” image both parrots’ heads are visually important, so the attention center point will be in the middle.

Four example images with predicted attention centers, shown in quadrants: a red door with a brass doorknob (top left); a smiling girl with a painted face, a colorful sweater, and ribbons in her hair (top right); a teal-shuttered, cathedral-style window in a sand-colored stucco wall with pink and red hibiscus in the foreground (bottom left); and a blue-and-yellow macaw next to a red-and-green macaw (bottom right).
Images are from the Kodak image data set: http://r0k.us/graphics/kodak/

The model is 2MB and is in the TensorFlow Lite format. It takes an RGB image as input and outputs a 2D point, which is the predicted center of human attention on the image. That predicted center is the place where we should start with operations (decoding and displaying, in the JPEG XL case). This allows the most visually salient/important regions to be processed as early as possible. Check out the code and continue to build upon it!
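
To make the input/output contract concrete, here is a minimal sketch of running the model with the TensorFlow Lite interpreter in Python. The file names and the preprocessing (resizing, scaling to [0, 1]) are assumptions for illustration only; check the google/attention-center repository for the exact recipe.

```python
# Minimal sketch: load the attention center model with TensorFlow Lite and
# predict the attention center of one image. File names and preprocessing
# are assumptions, not the official recipe from google/attention-center.
import numpy as np
import tensorflow as tf
from PIL import Image

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Resize the RGB image to whatever input shape the model expects.
_, height, width, _ = input_details[0]["shape"]
image = Image.open("photo.jpg").convert("RGB").resize((width, height))
pixels = np.expand_dims(np.asarray(image, dtype=np.float32) / 255.0, axis=0)

interpreter.set_tensor(input_details[0]["index"], pixels)
interpreter.invoke()

# The output is a 2D point: the predicted center of human attention (x, y).
center = interpreter.get_tensor(output_details[0]["index"])[0]
print("Predicted attention center:", center)
```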

Attention center ground-truth data

To train a model to predict the attention center, we first need ground-truth data for it. Given an image, attention points can either be collected with eye trackers [1] or approximated by mouse clicks on a blurry version of the image [2]. We first apply temporal filtering to those attention points and keep only the initial ones, and then apply spatial filtering to remove noise (e.g., random gazes). We then compute the center of the remaining attention points as the attention center ground truth. The figure below illustrates this process.

Five images in a row illustrating the process for a photo of a person standing on a rock by the ocean: the original image, the gaze/attention points, the result of temporal filtering, the result of spatial filtering, and the final attention center.
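
As a rough illustration of these filtering steps, the sketch below derives a ground-truth center from a list of timestamped attention points. The thresholds (how many initial points to keep, the spatial outlier radius) are hypothetical values chosen for the example, not the ones used for the released model.

```python
# Hypothetical sketch of computing the attention center ground truth from
# timestamped gaze/click points. keep_initial and max_radius are illustrative
# assumptions only.
import numpy as np

def attention_center_ground_truth(points, timestamps, keep_initial=10, max_radius=100.0):
    """points: (N, 2) (x, y) pixel coordinates; timestamps: (N,) collection times."""
    points = np.asarray(points, dtype=float)
    # Temporal filtering: keep only the earliest attention points.
    initial = points[np.argsort(timestamps)][:keep_initial]
    # Spatial filtering: drop points far from the median (e.g., random gazes).
    median = np.median(initial, axis=0)
    kept = initial[np.linalg.norm(initial - median, axis=1) <= max_radius]
    # The ground truth is the center of the remaining attention points.
    return kept.mean(axis=0)

center = attention_center_ground_truth(
    points=[(120, 80), (130, 85), (128, 90), (400, 300)],
    timestamps=[0.1, 0.2, 0.3, 0.4])
print(center)  # ~[126, 85]: the cluster's center, ignoring the stray point
```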

Attention center model architecture

The attention center model is a deep neural net that takes an image as input and uses a pre-trained classification network, e.g., ResNet or MobileNet, as the backbone. Several intermediate layers output by the backbone network are used as input to the attention center prediction module. These intermediate layers contain different information: shallow layers often contain low-level information like intensity, color, and texture, while deeper layers usually contain higher-level, more semantic information like shapes and objects. All of it is useful for attention prediction. The attention center prediction module applies convolution, deconvolution, and/or resizing operators, together with aggregation and a sigmoid function, to generate a weighting map for the attention center. An operator (the Einstein summation operator in our case) is then applied to compute the (gravity) center from the weighting map. The L2 norm between the predicted attention center and the ground-truth attention center is used as the training loss.

Attention center model architecture
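
For readers who want to see the shape of this computation, here is a simplified, eager-mode sketch in TensorFlow. The backbone choice (MobileNetV2), the specific intermediate layers, the weighting-map resolution, and the aggregation are illustrative assumptions and do not necessarily match the released model.

```python
# Simplified sketch of the architecture described above. Backbone, layer
# names, map resolution, and aggregation are assumptions for illustration.
import tensorflow as tf

H = W = 224   # assumed input resolution
GRID = 28     # resolution of the weighting map

backbone = tf.keras.applications.MobileNetV2(
    input_shape=(H, W, 3), include_top=False, weights="imagenet")
# One shallow layer (low-level color/texture) and one deep layer (more semantic).
feature_model = tf.keras.Model(
    backbone.input,
    [backbone.get_layer("block_3_expand_relu").output,
     backbone.get_layer("block_13_expand_relu").output])

conv_shallow = tf.keras.layers.Conv2D(1, 3, padding="same")
conv_deep = tf.keras.layers.Conv2D(1, 3, padding="same")
resize = tf.keras.layers.Resizing(GRID, GRID)

def predict_center(images):
    """images: (B, H, W, 3) float tensor (backbone preprocessing omitted for brevity)."""
    shallow, deep = feature_model(images)
    # Aggregate both branches and squash them into a weighting map with a sigmoid.
    logits = resize(conv_shallow(shallow)) + resize(conv_deep(deep))
    weights = tf.nn.sigmoid(logits)[..., 0]                    # (B, GRID, GRID)
    weights /= tf.reduce_sum(weights, axis=[1, 2], keepdims=True) + 1e-6
    # Gravity center of the weighting map via Einstein summation.
    xs = tf.linspace(0.0, W - 1.0, GRID)
    ys = tf.linspace(0.0, H - 1.0, GRID)
    cx = tf.einsum("bhw,w->b", weights, xs)
    cy = tf.einsum("bhw,h->b", weights, ys)
    return tf.stack([cx, cy], axis=-1)                         # (B, 2) in pixels

def center_loss(true_center, pred_center):
    # Training loss: L2 norm between predicted and ground-truth centers.
    return tf.norm(true_center - pred_center, axis=-1)
```

Calling predict_center on a batch of images yields one (x, y) pixel coordinate per image; at training time, center_loss compares it against the ground-truth centers described in the previous section.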

Progressive JPEG XL images with attention center model

JPEG XL is a new image format that allows the user to encode images in a way that ensures the more interesting parts come first. This has the advantage that, when viewing images transferred over the web, we can display the attention-grabbing part of the image first, i.e., the part the user looks at first; ideally, by the time the user looks elsewhere, the rest of the image has already arrived and been decoded. The earlier post "Using Saliency in progressive JPEG XL images" on the Google Open Source Blog illustrates how this works in principle. In short, in JPEG XL the image is divided into square groups (typically of size 256 x 256), and the JPEG XL encoder chooses a starting group in the image and then grows concentric squares around that group. It was this need to figure out where the attention center of an image is that led us to open source the attention center model, together with a script to use it in combination with the JPEG XL encoder. Progressive decoding of JPEG XL images was recently added to Chrome, starting with version 107. At the moment, JPEG XL is behind an experimental flag, which can be enabled by going to chrome://flags and searching for "jxl".
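
If you want to wire a predicted center into the encoder yourself, a small wrapper along the following lines should work, assuming a cjxl build whose command line accepts group-order and center options; the google/attention-center repository ships a ready-made script for this, so treat the flag names below as assumptions to verify against your cjxl version.

```python
# Hedged sketch: encode an image center-first with cjxl, starting from a
# predicted attention center. Assumes cjxl (from libjxl) is installed and that
# this build supports --group_order/--center_x/--center_y; otherwise use the
# script provided in the google/attention-center repository.
import subprocess

def encode_center_first(input_png, output_jxl, center_x, center_y):
    subprocess.run(
        ["cjxl", input_png, output_jxl,
         "--group_order=1",                    # center-first group order
         f"--center_x={int(center_x)}",        # predicted attention center (pixels)
         f"--center_y={int(center_y)}"],
        check=True)

# The center could come from the TensorFlow Lite model shown earlier.
encode_center_first("photo.png", "photo.jxl", 812, 640)
```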

To try out how partially loaded progressive JPEG XL images look, you can go to https://google.github.io/attention-center/.

By Moritz Firsching, Junfeng He, and Zoltan Szabadka – Google Research

References

[1] Valliappan, Nachiappan, Na Dai, Ethan Steinberg, Junfeng He, Kantwon Rogers, Venky Ramachandran, Pingmei Xu et al. "Accelerating eye movement research via accurate and affordable smartphone eye tracking." Nature communications 11, no. 1 (2020): 1-12.

[2] Jiang, Ming, Shengsheng Huang, Juanyong Duan, and Qi Zhao. "SALICON: Saliency in Context." In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1072-1080. 2015.
