Tuesday, June 26, 2012
Introducing new Fusion Tables API
We are very pleased to announce the public availability of the new Fusion Tables API. The new API includes all of the functionality of the existing SQL API, plus the ability to read and modify table and column metadata as well as the definitions of styles and templates for data visualization. This API is also integrated with the Google APIs Console, which lets developers manage all their Google APIs in one place and take advantage of built-in reporting and authentication features.
With this launch, we are also announcing a six-month deprecation period for the existing SQL API. Since the new API includes all of the functionality of the existing SQL API, developers can easily migrate their applications using our migration guide.
For a detailed description of the features in the new API, please refer to the API documentation.
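As an illustration of the SQL-style querying the API carries over (a sketch only; the table ID and API key below are placeholders, not real values), a read-only query can be issued as a simple HTTP GET against the v1 query endpoint:

```python
from urllib.parse import urlencode

# Sketch: the v1 query endpoint accepts a SQL-style statement as a URL
# parameter. <TABLE_ID> and <API_KEY> are placeholders for illustration.
API_ENDPOINT = "https://www.googleapis.com/fusiontables/v1/query"

def build_query_url(sql: str, api_key: str) -> str:
    """Build the GET URL for a read-only query (writes require authorization)."""
    return API_ENDPOINT + "?" + urlencode({"sql": sql, "key": api_key})

url = build_query_url("SELECT Name, Location FROM <TABLE_ID>", "<API_KEY>")
print(url)
```

Fetching this URL returns the matching rows; statements that modify data go through the same endpoint via POST with OAuth credentials.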
Become a Google Power Searcher
Posted by Terry Ednacot, Education Program Manager
Cross-posted with the Official Google Blog
You may already be familiar with some shortcuts for Google Search, like using the search box as a calculator or finding local movie showtimes by typing [movies] and your zip code. But there are many more tips, tricks and tactics you can use to find exactly what you’re looking for, when you most need it.
Today, we’ve opened registration for Power Searching with Google, a free, online, community-based course showcasing these techniques and how you can use them to solve everyday problems. Our course is aimed at empowering you to find what you need faster, no matter how you currently use search. For example, did you know that you can search for and read pages written in languages you’ve never even studied? Identify the location of a picture your friend took during his vacation a few months ago? How about finally identifying that green-covered book about gardening that you’ve been trying to track down for years? You can learn all this and more over six 50-minute classes.
Lessons will be released daily starting on July 10, 2012, and you can take them according to your own schedule during a two-week window, alongside a worldwide community. The lessons include interactive activities to practice new skills, and many opportunities to connect with others using Google tools such as Google Groups, Moderator and Google+, including Hangouts on Air, where world-renowned search experts will answer your questions on how search works. Googlers will also be on hand during the course period to help and answer your questions in case you get stuck.
Power Searching with Google blends the MOOC (Massive Open Online Course) learning format pioneered by Stanford and MIT with our social and communication tools to create what we hope is a true community learning experience.
Visit the course homepage to learn more. By the end of this course, you'll know several new techniques that will make you a Google Power Searcher and help you find out information about whatever you can imagine—from how to prepare for a new family pet to where moss grows on Stonehenge or how to grow katniss in your garden. Sign up now!
Friday, June 15, 2012
Third Market Algorithms and Optimization Workshop at Google NYC
Posted by Nitish Korula and Vahab Mirrokni, Google Research, New York
There are fascinating algorithmic and game theoretic challenges in designing both Google’s internal systems and our core products facing hundreds of millions of users. For example, both Google AdWords and the Ad Exchange run billions of auctions a day; showing the perfect ad to every user requires simple mechanisms to align incentives while simultaneously optimizing efficiency and revenue.
We think that research in these areas benefits from close cooperation between academia and industry. To this end, last week we held the Third Market Algorithms and Optimization Workshop at Google, immediately after STOC 2012. We invited several leading academics in these fields to meet with researchers and engineers at Google for a day of talks and discussions.
Éva Tardos from Cornell, a recent winner of the Gödel Prize, led off with a discussion of how to achieve efficiency in sequential auctions, where bidders arrive and depart one at a time instead of all bidding simultaneously.
Eyal Manor, Google engineering director for the Ad Exchange, gave an overview of the design and functioning of the exchange. This was an opportunity to have questions answered by the absolute expert, and the participants took full advantage of it!
Costis Daskalakis and Pablo Azar from MIT and Tim Roughgarden from Stanford talked about different aspects of optimal auctions in Bayesian settings. Costis talked about efficient implementation of optimal auctions in a class of combinatorial auctions. Both Tim and Pablo discussed optimal auctions in Bayesian settings with limited information. Tim, our other Gödel Prize winner, promoted the idea of designing simple auction rules that are independent of the distributions of buyers’ valuations, and Pablo presented optimal auction rules using only the mean and standard deviation of buyers’ valuations.
Bobby Kleinberg from Cornell and Gagan Goel from Google NYC presented recent work on pricing with budget constraints. Bobby’s talk was about procurement auctions where the auctioneer acts as a buyer with a budget constraining her procurements. Gagan, on the other hand, discussed Pareto-optimal ascending auctions where the auctioneer is selling to budget-constrained buyers. This has direct applications in Google AdWords auctions as advertisers aim to increase performance while staying within budget constraints.
With our mission of organizing all the world’s information, Google needs superior algorithmic techniques to analyze extremely large data sets. We had two talks on new algorithmic ideas for Big Data. From academia, Andrew McGregor gave an introduction to the new field of graph sketching. Though a graph on n nodes is O(n^2)-dimensional, Andy described how to find interesting properties of the graph (such as connectivity, approximate Minimum Spanning Trees, etc.) using only O(n polylog(n)) bits of information. These algorithms were based on clever use of the homomorphic properties of random projections of the graph’s adjacency matrix. In the next talk, Mohammad Mahdian from Google MTV explained a new model for evolving data; even a ‘simple’ problem like sorting becomes interesting when the order of elements changes over time. Mohammad showed that even if element swaps occur at the same rate as comparisons, one can compute an ordering with Kendall-Tau distance O(n ln ln n) from the true ordering at any time, very close to the optimal Ω(n).
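As a rough sketch of the evolving-data model (my own illustrative simulation, not the algorithm from the talk), one can let the true order drift by one random adjacent swap per step while a maintenance process spends one comparison per step on its estimate, then measure the Kendall tau distance between the two orderings:

```python
import random

def kendall_tau(order_a, order_b):
    """Kendall tau distance: number of item pairs the two orderings disagree on."""
    pos = {item: i for i, item in enumerate(order_b)}
    ranks = [pos[item] for item in order_a]
    n = len(ranks)
    return sum(1 for i in range(n) for j in range(i + 1, n) if ranks[i] > ranks[j])

def simulate(n=50, steps=2000, seed=0):
    """One comparison on our estimate per one swap in the evolving true order."""
    rng = random.Random(seed)
    true_order = list(range(n))
    estimate = list(range(n))
    for _ in range(steps):
        # The underlying order evolves: swap one random adjacent pair.
        i = rng.randrange(n - 1)
        true_order[i], true_order[i + 1] = true_order[i + 1], true_order[i]
        # Maintenance: compare one random adjacent pair of the estimate against
        # the *current* true order, swapping it if the pair is out of order.
        rank = {item: k for k, item in enumerate(true_order)}
        j = rng.randrange(n - 1)
        if rank[estimate[j]] > rank[estimate[j + 1]]:
            estimate[j], estimate[j + 1] = estimate[j + 1], estimate[j]
    return kendall_tau(estimate, true_order)
```

The interesting question in this model is exactly how far such an estimate must lag: the result above shows the achievable distance is O(n ln ln n), close to the Ω(n) lower bound.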
Later, Mukund Sundararajan from Google MTV discussed algorithmic problems in interpreting and presenting sales data to advertisers. He challenged us to design flexible human-friendly optimization algorithms that can be adopted and tuned by humans. Toward the end of the workshop, Varun Gupta, Google NYC postdoctoral researcher, gave a short presentation about the use of primal-dual techniques for online stochastic bin packing with application in assigning jobs to data centers.
We also discussed some of the main activities in the algorithms research group in New York, like the use of primal-dual techniques in online stochastic display ad allocation at Google and large-scale graph mining techniques based on MapReduce and Pregel. Corinna Cortes, Director of Research in New York, and Alfred Spector, VP of Research and Special Projects, gave short speeches. Corinna talked about our statistics, machine learning, and NLP research groups in New York, and Alfred challenged us to design mechanisms to take into account fairness in allocations and pricing. For more details, see the blog post by our colleague, ‘Muthu’ Muthukrishnan.
Part of what makes Google a fascinating place to work is the wealth of algorithmic and economic research challenges posed by Google advertising and large-scale data analysis systems. These challenges define research directions for the computer science and economics research communities. Workshops like this and our weekly research seminars help us continue collaborations between Google and academia. We hope to post videos of this workshop shortly, and look forward to organizing many more such events in the future.
Thursday, June 14, 2012
Recap of NAACL-12 including two Best Paper awards for Googlers
Posted by Ryan McDonald, Research Scientist, Google Research
This past week, researchers from across the world descended on Montreal for the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL). NAACL, as with other Association for Computational Linguistics meetings (ACL), is a premier meeting for researchers who study natural language processing (NLP). This includes applications such as machine translation and sentiment analysis, but also low-level language technologies such as the automatic analysis of morphology, syntax, semantics and discourse.
Like many applied fields in computer science, NLP underwent a transformation in the mid ‘90s from a primarily rule- and knowledge-based discipline to one whose methods are predominantly statistical and leverage advances in large data and machine learning. This trend continues at NAACL. Two common themes dealt with a historical deficiency of machine-learned NLP systems: they require expensive and difficult-to-obtain annotated data in order to achieve high accuracy. To this end, there were a number of studies on unsupervised and weakly-supervised learning for NLP systems, which aim to learn from large corpora containing little to no linguistic annotation, instead relying only on observed regularities in the data or easily obtainable annotations. This typically led to much talk during the question periods about how reliable it might be to use services such as Mechanical Turk to get the detailed annotations needed for difficult language prediction tasks. Multilinguality in statistical systems also appeared to be a common theme, as researchers have continued to move their focus from building systems for resource-rich languages (e.g., English) to building systems for the rest of the world’s languages, many of which do not have any annotated resources. Work here ranged from focused studies on single languages to studies aiming to develop techniques for a wide variety of languages, leveraging morphology, parallel data and regularities across closely related languages.
There was also an abundance of papers on text analysis for non-traditional domains. This includes the now standard tracks on sentiment analysis, but combined with a new focus on social media, and in particular NLP for microblogs. There was even a paper on predicting whether a given bill will pass committee in the U.S. Congress based on the text of the bill. The presentation of this paper included the entire video on how a bill becomes a law.
There were two keynote talks. The first talk by Ed Hovy of the Information Sciences Institute of the University of Southern California was on “A New Semantics: Merging Propositional and Distributional Information.” Prof. Hovy gave his insights into the challenge of bringing together distributional (statistical) lexical semantics and compositional semantics, which has been a need espoused recently by many leaders in the field. The second, by James W. Pennebaker, was called “A, is, I, and, the: How our smallest words reveal the most about who we are.” As a psychologist, Prof. Pennebaker represented the “outsider” keynote that typically draws a lot of interest from the audience, and he did not disappoint. Prof. Pennebaker spoke about how the use of function words can provide interesting social observations. One example was personal pronouns like “we”: increased use of the word tends to make listeners perceive the speaker as colder and more distant, rather than as engaging and accessible. This is partly due to a second and increasingly common meaning of “we” that is much more like “you,” e.g., when a boss says: “We must increase sales.”
Finally, this year the organizers of NAACL decided to do something new called “NLP Idol.” The idea was to have four senior researchers in the community select a paper from the past that they think will have (or should have) more impact on future directions of NLP research—in other words, to pluck a paper from obscurity and bring it into the limelight. Each researcher presented their case and three judges gave feedback American Idol-style, with Brian Roark hosting à la Ryan Seacrest. The winner was “PAM - A Program That Infers Intentions,” published in Inside Computer Understanding in 1981 by Robert Wilensky, which was selected and presented by Ray Mooney. PAM (“Plan Applier Mechanism”) was a system for understanding agents and their plans, and more generally, what is happening in a discourse and why. Some of the questions that PAM could answer were astonishing, which reminded the audience (or me at least) that while statistical methods have brought NLP broader coverage, this is often at the loss of specificity and deep knowledge representation that previous closed-world language understanding systems could achieve. This echoed sentiments in Prof. Hovy’s invited talk.
Ever since the early days of Google, Googlers have had a presence at NAACL and other ACL-affiliated events. NAACL this year was no different. Googlers authored three papers at the conference, one of which merited the conference’s Best Full Paper Award, and the other the Best Student Paper:
Cross-lingual Word Clusters for Direct Transfer of Linguistic Structure - IBM Best Student Paper Award
Oscar Täckström (Google intern), Ryan McDonald (Googler), Jakob Uszkoreit (Googler)
Vine Pruning for Efficient Multi-Pass Dependency Parsing - Best Full Paper Award
Alexander Rush (Google intern) and Slav Petrov (Googler)
Unsupervised Translation Sense Clustering
Mohit Bansal (Google intern), John DeNero (Googler), Dekang Lin (Googler)
Many Googlers were also active participants in the NAACL workshops, June 7 - 8:
Computational Linguistics for Literature
David Elson (Googler), Anna Kazantseva, Rada Mihalcea, Stan Szpakowicz
Automatic Knowledge Base Construction/Workshop on Web-scale Knowledge Extraction
Invited Speaker - Fernando Pereira, Research Director (Googler)
Workshop on Inducing Linguistic Structure
Accepted Paper - Capitalization Cues Improve Dependency Grammar Induction
Valentin I. Spitkovsky (Googler), Hiyan Alshawi (Googler) and Daniel Jurafsky
Workshop on Statistical Machine Translation
Program Committee members - Keith Hall, Shankar Kumar, Zhifei Li, Klaus Macherey, Wolfgang Macherey, Bob Moore, Roy Tromble, Jakob Uszkoreit, Peng Xu, Richard Zens, Hao Zhang (Googlers)
Workshop on the Future of Language Modeling for HLT
Invited Speaker - Language Modeling at Google, Shankar Kumar (Googler)
Accepted Paper - Large-scale discriminative language model reranking for voice-search
Preethi Jyothi, Leif Johnson (Googler), Ciprian Chelba (Googler) and Brian Strope (Googler)
First Workshop on Syntactic Analysis of Non-Canonical Language
Invited Speaker - Keith Hall (Googler)
Shared Task Organizers - Slav Petrov, Ryan McDonald (Googlers)
Evaluation Metrics and System Comparison for Automatic Summarization
Program Committee member - Katja Filippova (Googler)
Monday, June 11, 2012
2012 Google PhD Fellowships
Posted by Leslie Yeh Johnson, University Relations Manager
A doctoral degree is arguably the ultimate end goal of a modern education. But with the research opportunities now available in industry and the lure of the start-up, why do students pursue this advanced academic achievement? For many, it's the opportunity to explore a fascinating area in great depth. Computer Science is still a young, dynamic field where an innovative researcher might hit on something that can truly change the world.
Google’s global fellowship program was created to support those willing to take on this noble endeavor. This year, the fourth year of the program, we welcome two new regions and are delighted to be supporting 40 students’ graduate studies in Australia, Canada, China, Europe, India, and the United States. You can click here to see a list of all of our Google Fellowship recipients.
PhD students have a unique experience. They are intently focused on a specialized area of study, with a goal of producing tangible results in a defined timeframe. The process requires sophisticated knowledge of the domain, expert planning and problem-solving skills, and the ability to communicate their work and results through publications, conferences and ultimately, in authoring a book. These are highly transferable skills of great value, no matter what path the student chooses after graduate school.
Congratulations to our fellows; we applaud you on your chosen path and look forward to the accomplishments to come.
Wednesday, June 6, 2012
Hello science—meet HR
Posted by Jennifer Kurkoski, Ph.D., Manager, People & Innovation Lab
At Google we strive for innovation in all aspects of our business, and not just in the realm of technology: we apply science to organizational issues as well. But finding the right answers means asking the right questions—a skill at which academic researchers excel. Thus, a crucial piece of making science a part of HR involves sparking debate among academics and practitioners. To that end, Google’s People & Innovation Lab, or “PiLab,” hosted its 4th annual Research Summit at our headquarters in Mountain View, CA on May 10th and 11th.
Each year, the PiLab team hosts the Summit to bring social scientists from top universities together with key HR and business leaders from Google to examine complex issues like how to combat decision fatigue, how to provide incentives for creative work and how to further innovation by tapping diversity. The exchange of ideas during the Summit lays the foundation for future research.
How does this work in practice? One area we’ve explored is how to help Googlers save more for retirement. Working with attendees at past Summits, we examined the language of our annual retirement contribution reminders to U.S.-based employees to figure out what would be most helpful. We found that small changes, such as including numerical savings examples in the reminder emails, could influence Googlers’ savings decisions: Googlers who received higher example savings rates subsequently contributed more to their retirement funds over time (we know Googlers’ savings because of our 401(k) matching program).
Through both internally-generated and collaborative efforts, the PiLab has conducted research that is changing the way Google as a company operates, including developing effective managers and encouraging healthy food choices. This year participants also applied their research prowess to a particularly critical Google function: making dinner (see picture). Our chefs provided instruction in making flat bread pizzas and everyone tried their hand at the task. Nothing prompts conversation like a good meal!
But what is the PiLab, you ask? The PiLab plays the unusual role of conducting applied research and development within People Operations, Google’s version of Human Resources. Doing R&D in HR isn’t a particularly common practice, but when your employees build virtual tours of the Amazon and tools to translate between 60+ languages, you need creative ways to think about productivity, performance, and employee development. The PiLab’s collection of industrial & organizational psychologists, decision scientists, and organizational sociologists has as its mission to conduct innovative research that transforms organizational practice within Google and beyond.
Additionally, the Summit gives us an opportunity to expose Googlers to cutting-edge research in the social sciences, and to share the type of work the PiLab focuses on with the whole company. This year, Columbia University professor and Summit attendee Sheena Iyengar gave a talk on The Art of Managing All Our Choices. Prof. Iyengar drew on her years of research into choice overload to address how to optimize product offerings in an era of increasing consumer choice.
By fostering conversations on the issues confronting modern organizations, the PiLab overall and the Summit in particular aim to generate new theories and to challenge existing ones. The intent is to inspire new research at Google and elsewhere and ultimately to improve HR. The Lab looks forward to more collaborations with faculty … and, of course, to pie.
Tuesday, June 5, 2012
Burning CDs and DVDs over RDP
When connected to Windows Vista or Windows 7 over Remote Desktop (RDP), you can no longer burn CDs or DVDs from any application (e.g. Nero Burning ROM, Windows Media Player, Windows Explorer, ...). Nero Burning ROM reports that this is a security measure in Windows Vista. Connecting to the console session (/console or /admin, depending on your operating system and service pack) doesn't help either.
In Windows XP, there are no problems burning CDs or DVDs in an RDP session.
You can bypass this new security measure by changing the Group Policy settings:
- Press "Windows-R" to open the Run dialog, and start "gpedit.msc".
- In the treeview of the Group Policy editor window, descend to:
- Computer Configuration
- Administrative Templates
- System
- Removable Storage Access
- In the right pane, double click on "All Removable Storage: Allow direct access in remote sessions"
- Select "Enabled" and confirm with "OK".
- Log off and log back on; from then on, burning CDs and DVDs works in every application that supports it.
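If you prefer to script this change instead of clicking through gpedit.msc (for example on machines you administer remotely), the policy above is believed to correspond to the AllowRemoteDSM registry value; this mapping is my reading of the Group Policy templates, so verify it on your own system before relying on it. A .reg file would look roughly like this:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\RemovableStorageDevices]
"AllowRemoteDSM"=dword:00000001
```

Import it with a double click (or "reg import"), then log off and back on, just as with the Group Policy route.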
Monday, June 4, 2012
Research at Google on G+: Featuring Excellent Papers for 2011
Posted by Corinna Cortes, Google Research
In March, we announced on the blog our Excellent Papers for 2011. Chosen papers comprise a tiny fraction of our total publications and were selected for their outstanding contributions to a diverse range of disciplines across the computer science field. In the past, we have offered more detailed discussions of each featured paper in subsequent postings. We are pleased to be able to continue this tradition through our Research at Google page on G+, which we unveiled last month.
Just as our publications highlight technical and algorithmic advances, share lessons we’ve learned as we developed our products and services, and describe some of the technical challenges we face, our Research at Google G+ page will continue the communication in a format that is better for mutual interaction. Add Research at Google to your circles to learn more about our research agenda, the technology behind our products, and innovative developments across the broader academic and technical community.
This week, we picked up on our excellent papers recognition with a deep dive into Cascades of two-pole–two-zero asymmetric resonators are good models of peripheral auditory function, by Dick Lyon, Research Scientist. Tune into G+ regularly to learn more about the papers you’re most interested in.
History of C++
An amusing take on the history of C++, drawn as a "Lord of the Rings"-style map.
It is very up to date, featuring the new C++11 standard's lambda expressions and the auto keyword, with a nod to the deprecated "auto_ptr" way of defining a smart pointer, and to the pioneers of the boost libraries...
Worldwide coverage of mobile networks
An interesting site listing every country and its coverage of GSM 900/1800, GSM 850/1900, 3G, and LTE/WiMAX/4G networks.
http://www.worldtimezone.com/gsm.html
Very useful to verify if your (non-quad-band) mobile phone will work when you are abroad.