Google has filed a patent for a needle-free blood-drawing system aimed at people who frequently test their blood levels. The device, called “Needle-Free Blood Draw,” can penetrate the skin without a needle. Such a device might be used to draw a small amount of blood, for example for a glucose test.
The patent suggests that the device works by firing a microparticle into the skin from a barrel of pressurized gas. Negative pressure then lets the device collect a small amount of blood from the skin at the point where the microparticle entered, so no needles are used at any point in the process.
This isn’t the first device Google has been working on that is aimed at the 9% of adults aged 18 and over who have diabetes. Google Life Sciences – once a division of Google X, until the Alphabet restructuring – is working on contact lenses that can measure a patient’s blood sugar levels by analyzing their tears. The division is also developing a bandage-sized, cloud-connected sensor to help people monitor their glucose levels.
Google is taking its virtual reality efforts to the next level with the launch of a new app called Cardboard Camera, which enables Android users to create their own virtual reality content using the cameras on their phones.
With the app, you just hold out your phone and move in a circle to capture the scene around you. Then, when you put your phone in a Google Cardboard viewer, you can experience the photo in virtual reality. The photos are 3D panoramas that provide slightly different views for each eye, so that near things look near and far things look far. You can look around to explore the image in all directions, and even record sound with your photo to hear the moment exactly as it happened. With Cardboard Camera, anyone can create their own VR experience.
Offstage, in a barren conference room at the Paris climate talks, Bill Gates excitedly described the possibility of generating energy through the long-speculated process of artificial photosynthesis, using the energy of sunshine to produce liquid hydrocarbons that could challenge the supremacy of fossil fuels.
Gates was in Paris to push his latest bit of entrepreneurial philanthropy: the Breakthrough Energy Coalition, an informal club of 28 private investors from around the world, including several hedge fund billionaires who have agreed to follow his lead and pump seed money into energy research and development. Gates believes the energy sector suffers from a dearth of such funding, which is why much of the world is still burning coal for its power.
Gates’s readiness to put another billion dollars of his own money into his roughly billion-dollar portfolio of energy investments also helped convince 20 governments to commit to doubling their own R&D investments within five years.
Facebook CEO Mark Zuckerberg and his wife, Priscilla Chan, recently welcomed a baby girl. Alongside that announcement, the two pledged to give away 99% of their Facebook shares to “join many others in improving this world for the next generation.” Together, the couple’s shares are currently worth about $45 billion.
They are forming a new organization, called the Chan Zuckerberg Initiative, that will pursue those goals through a combination of charitable donations, private investment and promotion of government-policy reform.
We all know that computers are fast and precise at calculations, but what if they could also sense the emotional state of the person using them? This field is called affective computing, and it will soon be an important factor in the way people and computers communicate with each other. The theme is explored in the 2015 television series “Humans.”
Computers will interpret your body language to determine how you are feeling and then tailor their responses intuitively, just as we do with each other. What makes this even more compelling is that it is far more intuitive than the keyboard, mouse, or touch screen as an input method.
Non-verbal communication is still the principal way we get information from each other: by some often-cited estimates, around 70% of a message’s content is conveyed by body language, about 20% by tone of voice, and only 10% by the words themselves. Affective computing allows humans and computers to go beyond keyboards and use these rich, non-verbal channels of communication to good effect.
Emotions can be read by a computer through much the same process humans use. It begins by connecting an array of sensors (cameras, microphones, skin-conductivity devices) to a computer that gathers varied information about facial expression, posture, gesture, tone of voice and more. Software then processes the data and, by referencing a database of known patterns, categorizes the emotions behind the sensor readings.
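The classification step described above can be sketched as a simple nearest-pattern match. Everything here (the feature names, the numeric values, the emotion labels) is hypothetical, a minimal illustration of the idea rather than a real affective-computing system:

```python
import math

# Hypothetical "database" of known patterns. Each emotion is a feature
# vector of (brow_furrow, mouth_curve, voice_pitch), all scaled 0..1.
KNOWN_PATTERNS = {
    "happy":   (0.1, 0.9, 0.7),
    "sad":     (0.6, 0.1, 0.2),
    "angry":   (0.9, 0.2, 0.8),
    "neutral": (0.2, 0.5, 0.5),
}

def classify_emotion(features):
    """Return the emotion whose stored pattern is closest (Euclidean distance)."""
    return min(KNOWN_PATTERNS,
               key=lambda emotion: math.dist(features, KNOWN_PATTERNS[emotion]))

# A smile-like sensor reading lands closest to the "happy" pattern.
print(classify_emotion((0.15, 0.85, 0.65)))  # happy
```

Real systems replace the hand-written table with a trained statistical model, but the principle is the same: reduce sensor input to features, then match against learned patterns.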
Can you imagine the possibilities of this type of technology? It will be interesting to see how it develops and how broadly it can be built into technologies that already exist!
Engineers at the University of Toronto are “unrolling” the mysteries of cancer… literally. They have developed a way to grow cancer cells in the form of a rolled-up sheet that mimics the 3D environment of a tumor, yet can also be taken apart in seconds. The platform offers a way to speed up the development of new drugs and therapies and ask new questions about how cancer cells behave.
The difficulties of studying cancer cells in a traditional petri dish are well known. Growing tumors in petri dishes is the standard approach for this kind of work, but it has a problem: in a real tumor, cells near the center of the mass have less access to oxygen and nutrients, and these subtle differences are tough to replicate in a flat dish. Another approach, growing cancer cells on building blocks made of porous sponge, produces a 3D model with differing oxygen levels but leaves researchers with discontinuous blocks of cells to keep track of.
The rolled-up cancer strip, on the other hand, is essentially a 3D model that can be laid out in 2D. Its cells get less and less oxygen along the strip on a smooth gradient towards the center of the device, making it easier to analyze. Because of this, it can also be a boon for basic research into what makes a normal cell turn cancerous.
Personalized cancer treatment is a growing field. At Mount Sinai Hospital in New York, fruit flies are being modified to have the same genetic defects as individual cancer patients, so they can be tested for cancer treatments that might work on the patient.
It’s no secret that social media has influenced how we communicate. Some people take to social media to announce EVERYTHING that happens in their lives, while others remain quieter. One phrase has even entered the English lexicon: “So, have you made it ‘Facebook official’?”, a reference to letting the world know that you are in a relationship. On the flip side, Facebook is testing a new feature that lets you “take a break” after a break-up with that partner, so you see a lot less of what their life is like without you.
The features allow Facebook users to hide a former partner’s posts and profile; edit past updates in which both people are tagged; and control the status updates, photos, and other content an ex-partner will be able to view after the breakup.
People in the United States will be prompted to test these features when they change their relationship status. Other users won’t be told when someone uses the tools; the point of hiding someone’s profile or posts is to make it easy to do so without unfriending or blocking that person, and the other features are equally discreet.
Introducing these features is an implied admission of two things: there are real risks to using a service that encourages people to share everything about their daily lives, and not everything posted to Facebook has to be positive. The new features can make it easier for people leaving toxic relationships to protect themselves. Being able to avoid seeing an abuser without blocking them, which could provoke anger, is valuable; being able to hide new posts helps address the same issue.
Google+ is still trucking along as a social network. Google has announced a redesign for the site that focuses purely on the social aspects and moves it away from being people-based toward a focus on “interests.”
The new design brings a splash of color and more of Google’s Material Design aesthetic to the desktop site. The whole thing looks a lot more like the mobile app. The header has changed from a boring gray to a bright red, and the mobile app’s floating circular button even makes an appearance as the new way to write a post. The “core” of the site looks pretty much the same—text and images inside a scrolling list of cards.
Narrowing the focus of Google+ was probably the best way for Google to salvage the service. It originally started life as a Facebook-style social network for posting links, photos, status updates, and more with your friends. The original big innovation was the concept of dividing the people you followed on Google+ into “circles” and then sharing content with just the relevant groups of people, but it failed to catch on with users. Still, there’s no doubt that some good things came out of Google+ — particularly the excellent Google Photos project, which the company spun out of Google+ at I/O earlier this year. With the change, Google+ will formally be less about interacting with your friends and more about finding topics that interest you and meeting people across the internet who share those interests.
Following the horrific attacks in Paris that claimed more than 100 lives, people around the world took to social media looking for their loved ones. Social networks put forth tools to help people in times of crisis.
Facebook activated its Safety Check tool, which allows users in an area affected by a crisis to mark themselves or others as safe. The company built the tool for exactly these situations and has activated it five times in the past year, all after natural disasters.
Twitter kept followers informed by highlighting top news tweets, as well as messages of support posted by people around the world. On Friday night it also became a message board carrying information to help people in Paris get to safety. The hashtag #PorteOuverte, or “open door,” became a vehicle for offering shelter to those in Paris who needed it; Twitter revealed that 1 million tweets used the hashtag within 10 hours. In the United States, the hashtag #StrandedInUS gained traction as a way to help French travelers whose flights had been canceled.
There is a lot of talk about privacy these days, and more people are concerned about their searchable online data. Aware of this shift in user behavior, Google recently released a new tool for controlling online privacy, called “About me.”
Users can adjust their personal and work contact information, education and employment history as well as the places they have lived. It is also possible to control who sees gender, birthday, occupation, personal websites and social network URLs.
Google explains that all the content on the About Me page is “information that people explicitly provided to Google,” noting that because people control what information appears there, they can control what others see about them across Google services.
Machine learning, a type of artificial intelligence that employs software to interpret and make predictions from large sets of data, is in high demand in Silicon Valley. Some of the largest tech companies, including Microsoft, Facebook, and Apple, have thrown their hats into the ring. But it was Google that started the trend, and to remain innovative, Google needs to keep looking like the cutting-edge leader.
Hence TensorFlow, a machine-learning system that Google has used internally for a few years. Today, Google is taking it open source, releasing the software to fellow engineers, academics and anyone with enough coding skill. There is no denying that learning systems like it have made it possible to create and improve apps built on speech and image recognition.
For example, Google Photos has benefited from Google’s earlier machine-learning system, called DistBelief. Developed in 2011, DistBelief helped Google build large neural networks, but it had limitations, including difficult configuration and code that could not be shared externally. TensorFlow was designed to fix DistBelief’s shortcomings, and the company has now open-sourced it. However, it’s important to note that only part of the AI engine is being open-sourced.
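Systems like TensorFlow represent a computation as a dataflow graph: operations are nodes, and values flow between them when the graph is evaluated. The toy class below is not TensorFlow’s real API, just a minimal sketch of that underlying idea:

```python
# A toy dataflow graph: each Node is either a constant or an operation
# over other nodes, and eval() walks the graph to produce a value.
class Node:
    def __init__(self, op, inputs=(), value=None):
        self.op = op          # "const", "add", or "mul"
        self.inputs = inputs  # upstream nodes this node depends on
        self.value = value    # only used by "const" nodes

    def eval(self):
        if self.op == "const":
            return self.value
        args = [node.eval() for node in self.inputs]
        if self.op == "add":
            return args[0] + args[1]
        if self.op == "mul":
            return args[0] * args[1]
        raise ValueError(f"unknown op: {self.op}")

# Build a graph for (2 * 3) + 4, then evaluate it.
a = Node("const", value=2)
b = Node("const", value=3)
c = Node("const", value=4)
result = Node("add", inputs=(Node("mul", inputs=(a, b)), c))
print(result.eval())  # 10
```

Separating graph construction from evaluation is what lets a system like TensorFlow optimize the graph and run it across CPUs, GPUs, and clusters.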
By releasing TensorFlow, Google aims to make the software it built to develop and run its own AI systems a part of the standard toolset used by researchers. It may also help Google identify potential talent for the future.