Blog of the website «TechCrunch»

People

John Smith

John Smith, 48

Joined: 28 January 2014

Interests: No data

Jonnathan Coleman

Jonnathan Coleman, 32

Joined: 18 June 2014

About myself: You may say I'm a dreamer

Interests: Snowboarding, Cycling, Beer

Andrey II

Andrey II, 41

Joined: 08 January 2014

Interests: No data

David

David

Joined: 05 August 2014

Interests: No data

David Markham

David Markham, 65

Joined: 13 November 2014

Interests: No data

Michelle Li

Michelle Li, 41

Joined: 13 August 2014

Interests: No data

Max Almenas

Max Almenas, 53

Joined: 10 August 2014

Interests: No data

29Jan

29Jan, 31

Joined: 29 January 2014

Interests: No data

s82 s82

s82 s82, 26

Joined: 16 April 2014

Interests: No data

Wicca

Wicca, 36

Joined: 18 June 2014

Interests: No data

Phebe Paul

Phebe Paul, 26

Joined: 08 September 2014

Interests: No data

Артем Ступаков

Артем Ступаков, 93

Joined: 29 January 2014

About myself: Enjoying life!

Interests: No data

sergei jkovlev

sergei jkovlev, 59

Joined: 03 November 2019

Interests: music, movies, cars

Алексей Гено

Алексей Гено, 8

Joined: 25 June 2015

About myself: Hi

Interests: Interest1daasdfasf, http://apple.com

ivanov5056 Ivanov

ivanov5056 Ivanov, 69

Joined: 20 July 2019

Interests: No data



Main article: Cognitive science

<< Back Forward >>
Topics from 1 to 10 | in all: 17

You can now get your own artistic portrait in the style of a master thanks to AI

19:44 | 22 July

A new project called AI Portraits, from the MIT-IBM Watson AI Lab, can capture your essence in the artistic style of some of the world’s great masters. It uses AI powered by a generative adversarial network (GAN) to actually rebuild your mug from scratch, which goes above and beyond the old trick many other apps used of essentially “paintifying” your existing photo.

The tool, as described by The Verge, uses a database of over 45,000 iconic portraits from history, including works in various media. The AI then determines which style to employ based on the specific portrait you upload, and reconstructs your image in an approximation that proves very faithful to the originals upon which it’s based. Different photos will produce different results in terms of the style used, so be sure to upload a few.
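The article doesn’t say how the style is chosen, but one simple way to implement that kind of selection is to embed the uploaded photo and each reference style as feature vectors, then pick the closest match. Below is a minimal, hypothetical sketch — the vectors, style names and `pick_style` helper are all illustrative, not from the AI Portraits project:

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def pick_style(upload_features, style_features):
    """Return the style whose reference feature vector is closest
    (by cosine similarity) to the uploaded photo's features."""
    best_style, best_score = None, -1.0
    for name, ref in style_features.items():
        score = cosine_similarity(upload_features, ref)
        if score > best_score:
            best_style, best_score = name, score
    return best_style

# Toy vectors standing in for embeddings from a pretrained image encoder.
styles = {
    "renaissance": np.array([0.9, 0.1, 0.0]),
    "impressionist": np.array([0.1, 0.9, 0.2]),
}
print(pick_style(np.array([0.8, 0.2, 0.1]), styles))  # renaissance
```

In a real system the vectors would come from a pretrained encoder rather than hand-written arrays, but the nearest-style logic stays the same.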


Researchers also note that your source images are deleted immediately after they’re used to generate the results, so there should be no privacy issues here. The website does have a steady stream of generated photos uploaded by other users, but the digital paintings aren’t really photo-accurate anyway – that’s the point. Plus, there’s no identifying information attached to the AI-generated images.


This is, overall, a super interesting way to kill some time and get some interesting, if a bit pretentious, profile photos for your various online presences. It may throw an error because it’s experiencing a high volume of interest right now, but just reload and try again and you should get a result without issue the next time around.

Also, a bit of a bummer note – it doesn’t work on dogs or cats, at least in my testing.

 



Join us in Las Vegas during CES

20:57 | 21 December

We will be holding a small event during CES in Las Vegas and we want to see you! We’re looking to meet some cool hardware and crypto startups so the good folks at Work In Progress have opened up their space to us and 200 of you all to hold a meetup and pitch-off.

The event will be held at Work In Progress, 317 South 6th Street on Wed, January 9, 2019 between 6:00 PM – 9:00 PM PST.

There are only 200 tickets so if you want to come please pick one up ASAP. The meetup is open to everyone so head over if you’d like to talk tech. You can pick up a ticket here.

If you’d like to pitch at the event I’ll be picking ten companies who will have three minutes to pitch without slides. Since this is a hardware event I recommend bringing a few of your items to show off. If you’d like to pitch, fill this out and I will contact those who will be coming up on stage.

See you in Vegas!

 



How machine learning systems sometimes surprise us

13:43 | 13 November

This simple spreadsheet of machine learning foibles may not look like much, but it’s a fascinating exploration of how machines “think.” The list, compiled by researcher Victoria Krakovna, describes various situations in which machines followed the letter of the law while completely ignoring its spirit.

For example, in the video below, a machine learning algorithm learned that it could rack up points not by finishing a boat race but by flipping around in circles to collect them. In another simulation, “where survival required energy but giving birth had no energy cost, one species evolved a sedentary lifestyle that consisted mostly of mating in order to produce new children which could be eaten (or used as mates to produce more edible children).” This led to what Krakovna called “indolent cannibals.”

It’s obvious that these machines aren’t “thinking” in any real sense, but when given parameters and the ability to evolve an answer, it’s also obvious that they will come up with some fun ideas. In another test, a robot learned to move a block by smacking the table with its arm, and still another “genetic algorithm [was] supposed to configure a circuit into an oscillator, but instead [made] a radio to pick up signals from neighboring computers.” A cancer-detecting system found that pictures of malignant tumors usually contained rulers, and so gave plenty of false positives.

Each of these examples shows the unintended consequences of trusting machines to learn. They will learn but they will also confound us. Machine learning is just that – learning that is understandable only by machines.

One final example: in a game of Tetris in which a robot was required to “not lose,” the program paused “the game indefinitely to avoid losing.” Now it just needs to throw a tantrum and we’d have a clever three-year-old on our hands.
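The Tetris anecdote is easy to reproduce in miniature. In the toy, single-state game below — my own illustrative setup, not Krakovna’s — the reward is simply “survive this step,” and a tabular Q-learner discovers that pausing dominates playing:

```python
import random

# Toy "don't lose" game: each step the agent either PLAYS (risking a loss)
# or PAUSES (nothing happens). Reward is +1 per step survived, 0 on loss.
PLAY, PAUSE = 0, 1

def step(action, rng):
    # Playing loses 30% of the time; pausing never loses.
    if action == PLAY and rng.random() < 0.3:
        return 0.0, True   # lost: no reward, episode ends
    return 1.0, False      # survived this step

def train(episodes=2000, steps=20, alpha=0.1, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = [0.0, 0.0]  # single-state Q-values for PLAY and PAUSE
    for _ in range(episodes):
        for _ in range(steps):
            # Epsilon-greedy action choice.
            a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda x: q[x])
            r, done = step(a, rng)
            q[a] += alpha * (r - q[a])  # one-state update, no bootstrapping
            if done:
                break
    return q

q = train()
print("Q(play) =", round(q[PLAY], 2), " Q(pause) =", round(q[PAUSE], 2))
# The agent learns that pausing is strictly better under this reward.
```

The misspecification is entirely in the reward function: nothing says “make progress,” so the safest policy is to do nothing forever, exactly like the paused Tetris game.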

 



New tech lets robots feel their environment

00:30 | 16 October

A new technology from researchers at Carnegie Mellon University will add sound and vibration awareness to create truly context-aware computing. The system, called Ubicoustics, adds additional bits of context to smart device interaction, allowing a smart speaker to know it’s in a kitchen, or a smart sensor to know you’re in a tunnel versus on the open road.

“A smart speaker sitting on a kitchen countertop cannot figure out if it is in a kitchen, let alone know what a person is doing in a kitchen,” said Chris Harrison, a researcher at CMU’s Human-Computer Interaction Institute. “But if these devices understood what was happening around them, they could be much more helpful.”

The first implementation of the system uses built-in speakers to create “a sound-based activity recognition.” How they are doing this is quite fascinating.

“The main idea here is to leverage the professional sound-effect libraries typically used in the entertainment industry,” said Gierad Laput, a Ph.D. student. “They are clean, properly labeled, well-segmented and diverse. Plus, we can transform and project them into hundreds of different variations, creating volumes of data perfect for training deep-learning models.”
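The transformation step Laput describes can be approximated with basic signal operations. The sketch below is a guess at the kind of augmentation meant — the lab’s exact pipeline isn’t described here — turning one labeled sound-effect clip into several randomized variants via crude time-stretching, gain changes and added noise:

```python
import numpy as np

def time_stretch(wave, rate):
    # Crude time-stretch by linear resampling (this also shifts pitch;
    # production systems use phase-vocoder style methods instead).
    n = int(len(wave) / rate)
    idx = np.linspace(0, len(wave) - 1, n)
    return np.interp(idx, np.arange(len(wave)), wave)

def augment(wave, rng):
    """Produce one randomized variant of a labeled sound clip."""
    out = time_stretch(wave, rate=rng.uniform(0.8, 1.25))
    out = out * rng.uniform(0.5, 1.5)                # random gain
    out = out + rng.normal(0, 0.01, size=len(out))   # background noise
    return out

rng = np.random.default_rng(42)
clip = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)  # 1 s of A4 at 16 kHz
variants = [augment(clip, rng) for _ in range(5)]
print([len(v) for v in variants])  # lengths vary with the stretch rate
```

Each variant keeps the original clip’s label, which is how a small, clean sound-effect library can be blown up into “volumes of data” for training a deep model.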

From the release:

Laput said recognizing sounds and placing them in the correct context is challenging, in part because multiple sounds are often present and can interfere with each other. In their tests, Ubicoustics had an accuracy of about 80 percent — competitive with human accuracy, but not yet good enough to support user applications. Better microphones, higher sampling rates and different model architectures all might increase accuracy with further research.

In a separate paper, HCII Ph.D. student Yang Zhang, along with Laput and Harrison, describe what they call Vibrosight, which can detect vibrations in specific locations in a room using laser vibrometry. It is similar to the light-based devices the KGB once used to detect vibrations on reflective surfaces such as windows, allowing them to listen in on the conversations that generated the vibrations.

This system uses a low-power laser and reflectors to sense whether an object is on or off or whether a chair or table has moved. The sensor can monitor multiple objects at once and the tags attached to the objects use no electricity. This would let a single laser monitor multiple objects around a room or even in different rooms, assuming there is line of sight.
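Although the paper’s signal processing isn’t described here, an on/off detector of the kind Vibrosight enables can be sketched with a simple spectral test: look for energy at the machine’s known operating frequency. Everything below (the sample rate, the 60 Hz motor hum, the threshold) is an illustrative assumption, not the researchers’ implementation:

```python
import numpy as np

def is_running(signal, fs, motor_hz, threshold=5.0):
    """Guess whether a machine is on by checking whether the vibration
    spectrum has a strong peak near its operating frequency."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs > motor_hz - 2) & (freqs < motor_hz + 2)
    noise_floor = np.median(spectrum) + 1e-12
    return spectrum[band].max() / noise_floor > threshold

fs = 1000  # assumed 1 kHz sample rate from the vibration sensor
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
off = rng.normal(0, 0.1, len(t))              # ambient noise only
on = off + 0.5 * np.sin(2 * np.pi * 60 * t)   # add a 60 Hz motor hum
print(is_running(off, fs, 60), is_running(on, fs, 60))  # False True
```

The same thresholding idea extends to “has this chair moved?” by watching for transient bursts rather than a steady tone.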

The research is still in its early stages but expect to see robots that can hear when you’re doing the dishes and, depending on their skills, hide or offer to help.

 



This tech (scarily) lets video change reality

20:09 | 11 September

Researchers at Carnegie Mellon University have created a method to turn one video into the style of another. While this might be a little unclear at first, take a look at the video below. In it, the researchers have taken an entire clip from John Oliver and made it look like Stephen Colbert said it. Further, they were able to mimic the motion of a flower opening with another flower.

In short, they can make anyone (or anything) look like they are doing something they never did.

“I think there are a lot of stories to be told,” said CMU Ph.D. student Aayush Bansal. He and the team created the tool to make it easier to shoot complex films, perhaps by replacing the motion in simple, well-lit scenes and copying it into an entirely different style or environment.

“It’s a tool for the artist that gives them an initial model that they can then improve,” he said.

The system uses something called generative adversarial networks (GANs) to move one style of image onto another without much matching data. GANs, however, create many artifacts that can mess up the video as it is played.

In a GAN, two models are created: a discriminator that learns to detect what is consistent with the style of one image or video, and a generator that learns how to create images or videos that match a certain style. When the two work competitively — the generator trying to trick the discriminator and the discriminator scoring the effectiveness of the generator — the system eventually learns how content can be transformed into a certain style.
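That competition can be written down as an alternating training loop. The numpy-only toy below is a drastic simplification — 1-D data, a linear generator, a logistic discriminator, nothing like the video models in the paper — but it shows the generator/discriminator structure:

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from N(3, 1). The generator must learn to mimic them.
def real_batch(n):
    return rng.normal(3.0, 1.0, n)

# Generator g(z) = w*z + b; discriminator D(x) = sigmoid(a*x + c).
w, b = 1.0, 0.0      # generator parameters
a, c = 0.0, 0.0      # discriminator parameters
lr = 0.05

for _ in range(2000):
    z = rng.normal(0, 1, 64)
    fake, real = w * z + b, real_batch(64)

    # Discriminator ascent on log D(real) + log(1 - D(fake)):
    # score real samples high, generated samples low.
    d_real, d_fake = sigmoid(a * real + c), sigmoid(a * fake + c)
    a += lr * np.mean((1 - d_real) * real - d_fake * fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator ascent on log D(fake): try to fool the discriminator.
    d_fake = sigmoid(a * fake + c)
    grad = (1 - d_fake) * a          # d log D / d (generator output)
    w += lr * np.mean(grad * z)
    b += lr * np.mean(grad)

print("generator output mean ~", round(b, 2))  # drifts toward the real mean of 3
```

The generator never sees the real data directly; it only gets the discriminator’s gradient, which is exactly the adversarial dynamic described above.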

The researchers created something called Recycle-GAN that reduces the imperfections by using “not only spatial, but temporal information.”

“This additional information, accounting for changes over time, further constrains the process and produces better results,” wrote the researchers.

Recycle-GAN can obviously be used to create so-called deepfakes, allowing nefarious folks to simulate someone saying or doing something they never did. Bansal and his team are aware of the problem.

“It was an eye opener to all of us in the field that such fakes would be created and have such an impact. Finding ways to detect them will be important moving forward,” said Bansal.

 



TechCrunch Disrupt SF 2018 dives deep into artificial intelligence and machine learning

19:54 | 23 August

As fields of research, machine learning and artificial intelligence both date back to the 1950s. More than half a century later, the disciplines have graduated from the theoretical to practical, real-world applications. We’ll have some of the top minds in both categories on stage at Disrupt San Francisco in early September to discuss the latest advances and the future of AI and ML.

For the first time, Disrupt SF will be held in San Francisco’s Moscone Center. It’s a huge space, which meant we could dramatically increase the amount of programming offered to attendees. And we did. Here’s the agenda. Tickets are still available even though the show is less than two weeks away. Grab one here.

The show features the themes currently facing the technology world, including artificial intelligence and machine learning. Some of the top minds in AI and ML are speaking on several stages, and some are taking audience questions. We’re thrilled to be joined by Dr. Kai-Fu Lee, former president of Google China and current CEO of Sinovation Ventures; Colin Angle, co-founder and CEO of iRobot; Claire Delaunay, Nvidia VP of engineering; and, among others, Dario Gil, IBM VP of AI.

Dr. Kai-Fu Lee is the CEO and chairman of Sinovation, a venture firm based in the U.S. and China, and he has emerged as one of the world’s top prognosticators on artificial intelligence and how the technology will disrupt just about everything. Dr. Lee wrote in The New York Times last year that AI is “poised to bring about a wide-scale decimation of jobs — mostly lower-paying jobs, but some higher-paying ones, too.” Dr. Lee will also be on our Q&A stage (after his interview on the Main Stage) to take questions from attendees.

Colin Angle co-founded iRobot with fellow MIT grads Rod Brooks and Helen Greiner in 1990. Early on, the company provided robots for military applications, and then in 2002 introduced the consumer-focused Roomba. Angle has plenty to talk about. As the CEO and chairman of iRobot, he led the company through the sale of its military branch in 2016 so the company could focus on robots in homes. If there’s anyone who knows how to both work with the military and manage consumers’ expectations of household robots, it’s Colin Angle, and we’re excited to have him speaking at the event, where he will also take questions from the audience on the Q&A stage.

Claire Delaunay is vice president of engineering at Nvidia, where she is responsible for the Isaac robotics initiative and leads a team to bring Isaac to market for roboticists and developers around the world. Prior to joining Nvidia, Delaunay was the director of engineering at Uber, after it acquired Otto, the startup she co-founded. She was also the robotics program lead at Google and founded two companies, Botiful and Robotics Valley. Delaunay will also be on our Q&A stage (after her interview on the Main Stage) to take questions from attendees.

Dario Gil, the head of IBM’s AI research efforts and quantum computing program, is coming to Disrupt SF to talk about the current state of quantum computing. We may even see a demo or two of what’s possible today and use that to separate hype from reality. Among the large tech firms, IBM — and specifically the IBM Q lab — has long been at the forefront of the quantum revolution. Last year, the company showed off its 50-qubit quantum computer, and you can already start building software for it using the company’s developer kit.

Sam Liang is the CEO and co-founder of AISense Inc., based in Silicon Valley. The company is funded by Horizons Ventures (a backer of DeepMind, Waze, Zoom and Facebook), Tim Draper, and Stanford’s David Cheriton (the first investor in Google), among others. AISense has created Ambient Voice Intelligence™ technologies with deep learning that understand human-to-human conversations. Its Otter.ai product digitizes all voice meetings and video conferences, makes every conversation searchable and also provides speech analytics and insights. Otter.ai is the exclusive provider of automatic meeting transcription for Zoom Video Communications.

Laura Major is the vice president of engineering at CyPhy Works, where she leads R&D, product design and development, and manages the multi-disciplinary engineering team. Prior to joining CyPhy Works, she worked at Draper Laboratory as a division lead, where she developed the first human-centered engineering capability and expanded it to include machine intelligence and AI. Laura also grew multiple programs and engineering teams that contributed to the development and expansion of ATAK, which is now in wide use across the military.

Dr. Jason Mars founded and runs Clinc to try to close the gap in conversational AI by emulating human intelligence to interpret unstructured, unconstrained speech. AI has the potential to change everything, but there is a fundamental disconnect between what AI is capable of and how we interface with it. Clinc is currently targeting the financial market, letting users converse with their bank account using natural language without any pre-defined templates or hierarchical voice menus. At Disrupt SF, Mars is set to debut other ways that Clinc’s conversational AI can be applied. Without ruining the surprise, let me just say that this is going to be a demo you won’t want to miss. After the demo, he will take questions on the Q&A stage.

Chad Rigetti, the namesake founder of Rigetti Computing, will join us at Disrupt SF 2018 to explain Rigetti’s approach to quantum computing. It’s two-fold: on one front, the company is working on the design and fabrication of its own quantum chips; on the other, the company is opening up access to its early quantum computers for researchers and developers by way of its cloud computing platform, Forest. Rigetti Computing has raised nearly $70 million to date according to Crunchbase, with investment from some of the biggest names around. Meanwhile, labs around the country are already using Forest to explore the possibilities ahead.

Kyle Vogt co-founded Cruise Automation and sold it to General Motors in 2016. He stuck around after the sale and still leads the company today. Since the acquisition, Cruise has scaled rapidly while seeming to maintain a scrappy startup feel, even though it is now a division of a massive corporation. The company had 30 self-driving test cars on the road in 2016 and later rolled out a high-definition mapping system. In 2017 the company started running an autonomous ride-hailing service for its employees in San Francisco, later announcing its self-driving cars would hit New York City. Recently, SoftBank’s Vision Fund invested $2.25 billion in GM Cruise Holdings LLC, and when the deal closes, GM will invest an additional $1.1 billion. The investments are expected to inject enough capital into Cruise for the unit to reach commercialization at scale beginning in 2019.

 



VR helps us remember

14:21 | 14 June

Researchers at the University of Maryland have found that people remember information better if it is presented in VR rather than on a two-dimensional personal computer screen. This means VR education could be an improvement on tablet- or device-based learning.

“This data is exciting in that it suggests that immersive environments could offer new pathways for improved outcomes in education and high-proficiency training,” said Amitabh Varshney, dean of the College of Computer, Mathematical, and Natural Sciences at UMD.

The study was quite complex and looked at recall in forty subjects who were comfortable with computers and VR. The researchers found an 8.8 percent improvement in recall.

To test the system they created a “memory palace” where they placed various images. This sort of “spatial mnemonic encoding” is a common memory trick that allows for better recall.

“Humans have always used visual-based methods to help them remember information, whether it’s cave drawings, clay tablets, printed text and images, or video,” said lead researcher Eric Krokos. “We wanted to see if virtual reality might be the next logical step in this progression.”

From the study:

Both groups received printouts of well-known faces–including Abraham Lincoln, the Dalai Lama, Arnold Schwarzenegger and Marilyn Monroe–and familiarized themselves with the images. Next, the researchers showed the participants the faces using the memory palace format with two imaginary locations: an interior room of an ornate palace and an external view of a medieval town. Both of the study groups navigated each memory palace for five minutes. Desktop participants used a mouse to change their viewpoint, while VR users turned their heads from side to side and looked up and down.

Next, Krokos asked the users to memorize the location of each of the faces shown. Half the faces were positioned in different locations within the interior setting–Oprah Winfrey appeared at the top of a grand staircase; Stephen Hawking was a few steps down, followed by Shrek. On the ground floor, Napoleon Bonaparte’s face sat above a majestic wooden table, while the Rev. Martin Luther King Jr. was positioned in the center of the room.

Similarly, for the medieval town setting, users viewed images that included Hillary Clinton’s face on the left side of a building, with Mickey Mouse and Batman placed at varying heights on nearby structures.

Then, the scene went blank, and after a two-minute break, each memory palace reappeared with numbered boxes where the faces had been. The research participants were then asked to recall which face had been in each location where a number was now displayed.

The key, say the researchers, was for participants to identify each face by its physical location and its relation to surrounding structures and faces–and also the location of the image relative to the user’s own body.

Desktop users could perform the feat but VR users performed it statistically better, a fascinating twist on the traditional role of VR in education. The researchers believe that VR adds a layer of reality to the experience that lets the brain build a true “memory palace” in 3D space.

“Many of the participants said the immersive ‘presence’ while using VR allowed them to focus better. This was reflected in the research results: 40 percent of the participants scored at least 10 percent higher in recall ability using VR over the desktop display,” wrote the researchers.

“This leads to the possibility that a spatial virtual memory palace–experienced in an immersive virtual environment–could enhance learning and recall by leveraging a person’s overall sense of body position, movement and acceleration,” said researcher Catherine Plaisant.

 



Empathy technologies like VR, AR, and social media can transform education

00:00 | 23 April

Jennifer Carolan Contributor
Jennifer Carolan is a general partner and co-founder of Reach Capital.

In The Better Angels of Our Nature, Harvard psychologist Steven Pinker makes the case for reading as a “technology for perspective-taking” that has the capacity not only to evoke people’s empathy but also to expand it. “The power of literacy,” he argues, “get[s] people in the habit of straying from their parochial vantage points” while “creating a hothouse for new ideas about moral values and the social order.”

The first major empathy technology was Gutenberg’s printing press, invented around 1440. With the mass production of books came widespread literacy and the ability to inhabit the minds of others. While this may sound trite, it was actually a seismic innovation for people in the pre-industrial age who didn’t see, hear or interact with those outside of their village. More recently, other technologies like television and virtual reality have made further advances, engaging more of the senses to deepen the simulated human experience.

We are now on the cusp of another breakthrough in empathy technologies, one that has its roots in education. Empathy technologies expand our access to diverse literature, allow us to more deeply understand each other and create opportunities for meaningful collaboration across racial, cultural, geographic and class backgrounds. The new empathy technologies don’t leave diversity of thought to chance; rather, they intentionally build for it.

Demand for these tools originates from educators both in schools and corporate environments who have a mandate around successful collaboration. Teachers who are on the front lines of this growing diversity consider it their job to help students and employees become better perspective-takers.

Our need to expand our circles of empathy has never been more urgent. We as a nation are becoming more diverse, segregated and isolated by the day.

The high school graduating class of 2020 will be majority-minority, and growing income inequality has created a vast income and opportunity gap. Our neighborhoods have regressed to higher levels of socio-economic segregation; families from different sides of the tracks are living in increasing isolation from one another.

Photo courtesy of Flickr/Dean Hochman

These new empathy technologies are very different from the social media platforms that once held so much promise to connect us all in an online utopia. The reality is that social media has moved us in the opposite direction. Instead, our platforms have us caught in an echo chamber of our own social filters, rarely exposed to new perspectives.

And it’s not just social media: clickbait tabloid journalism has encouraged mocking and judgment rather than the empathy-building journey of a great piece of writing by Toni Morrison or Donna Tartt. In the rich depth of literature, we empathize with the protagonist, and when their flaws are inevitably revealed, we are humbled and see ourselves in their complex, imperfect lives. Research has shown that those who read more literary fiction are better at detecting and understanding others’ emotions.

What follows are several examples of empathy technologies in brick-and-mortar schools, online learning and corporate learning.

Empathy technologies enhance human connection rather than replacing it. Outschool is a marketplace for live online classes that connects K-12 students and teachers in small groups over video chat to explore shared interests. Historically, online learning has offered great choice and access, but at the cost of student engagement and human connection.

Outschool’s use of live video-chat and the small-group format removes the need for that trade-off. Kids and teachers see and hear each other, interacting in real-time like in a school classroom, but with participants from all over the world and from different backgrounds.

Live video chat on Outschool

The intentionality of curating a diverse library of content is a key difference between the new empathy technologies and social media. Newsela is a news platform delivering a bonanza of curated, leveled content to the classroom every day. It’s the antidote to the stale, single-source textbook, refreshed once a decade. In the screenshot below, children are exposed to stories about Mexico, gun rights and Black women. Teachers often use Newsela articles as a jumping-off point for a rich classroom discussion where respectful discourse skills are taught and practiced.

Newsela’s interface.

Business leaders are increasingly touting empathy as a critical leadership trait and using these technologies in their own corporate education programs, both for leadership and for everyday employees. Google’s Sundar Pichai describes his management style as “the ability to transcend the work and work well with others.” Microsoft’s Satya Nadella believes that empathy is a key source of business innovation and a prerequisite for one’s ability to “grasp customers’ un-met, unarticulated needs.” Uber’s new CEO Dara Khosrowshahi and Apple’s Tim Cook round out a cohort of leaders who are listeners first and contrast sharply with the stereotypical brash Silicon Valley CEO.

To deepen employees’ empathy, cutting-edge corporations like Amazon are using virtual environments like Mursion to practice challenging interpersonal interactions. Mursion’s virtual simulations are powered by trained human actors who engage in real-time conversations with employees. I tried it out by role-playing a manager discussing mandatory overtime with a line worker who was struggling to keep two part-time jobs. The line worker described to me how last-minute overtime requests threw his schedule into chaos, put his second job at risk and impacted his childcare situation.

 



Juni Learning is bringing individualized programming tutorials to kids online

01:08 | 26 January

Juni Learning wants to give every kid access to a quality education in computer programming.

The company, part of Y Combinator’s latest batch of startups, is taking the same approach that turned VIPKID into the largest Chinese employer in the U.S. and a runaway hit in the edtech market — by matching students with vetted and pre-qualified online tutors.

While VIPKID focused on teaching English, Juni wants to teach kids how to code.

So far, the company has taught thousands of kids around the world how to code in Scratch and Python and offered instruction in AP Computer Science A, competition programming and overall web development.

Founded by Vivian Shen and Ruby Lee, Juni Learning was born of the two women’s own frustrations in learning how to code. While both eventually made their way to the computer science department at Stanford (where the two friends first met), it was a long road to get there.

Lee (class of 2013) and Shen (class of 2014) both had to fight to get their computer science educations off the ground. Although Shen grew up in Palo Alto, Calif. — arguably ground zero for technology development in the Western world — there was only one computer science class on offer at her high school.

For Lee, who grew up in Massachusetts outside of Boston, the high school she attended was a computer science wasteland… with nothing on offer.

“As public awareness of women gaining engineering roles [grew], we started discussing how to make education more accessible,” says Shen. “I was traveling in China and started hearing about these amazing education companies [like VIPKID].”

Indeed, Cindy Mi, VIPKID chief executive, was an inspiration for both women. “I thought, why couldn’t a model like that bring great computer science education to the U.S.,” Shen says.

The company offers different plans starting with an individual tutoring session running about $250 per month. Customers also can sign up for group classes that are capped at $160 per month.

“When you think about computer science education — since it’s such an important subject for kids — why aren’t they getting the best education they can get?” Shen asks.

The prices were set after feedback from customers and sit at what Shen said was a sweet spot. VIPKID classes cost around $30 per hour.

The company initially had a soft launch in late August and was just accepted into the recent batch of Y Combinator companies.

Juni Learning recruits its tutors from current and former computer science students at top-tier colleges — primarily in California.

The company charges $250 per month for once-a-week classes and pays its teachers an undisclosed amount based on their previous experience teaching computer science to children.

The company’s product couldn’t come at a better time to reach students in both the U.S. and international markets… like China.

As The Wall Street Journal notes, investing in coding could be the next big thing for Chinese investors in education technology.

“Coding is about the only course that has the potential to become as important as English for students and for the industry,” Zhang Lijun, a venture capitalist who backs Chinese edtech companies, told the WSJ. “But nobody knows when that’s going to happen.”

By using a model that’s already proven successful in China — and that resonates with consumers in the U.S. — Juni Learning may have gotten ahead of the curve.


 



Evolve Foundation launches a $100 million fund to find startups working to relieve human suffering

01:58 | 4 November

It seems there’s nothing but bad news out there lately, but here’s some good news — the nonprofit Evolve Foundation has raised $100 million for a new fund called the Conscious Accelerator to combat, through technology, the loneliness, purposelessness, fear and anger spreading throughout the world.

Bo Shao, co-founder of Matrix Partners China, will lead the fund and will be looking for entrepreneurs focusing on tech that can help people become more present and aware.

“I know a lot of very wealthy people and many are very anxious or depressed,” he told TechCrunch. A lot of this he attributes to the way we use technology, especially social media networks.

“It becomes this anxiety-inducing activity where I have to think about what’s the next post I should write to get most people to like me and comment and forward,” he said. “It seems that post has you trapped. Within 10 minutes, you are wondering how many people liked this, how many commented. It was so addicting.”

Teens are especially prone to this anxiety, he points out. It turns out it’s a real mental health condition known as Social Media Anxiety Disorder (SMAD).

“Social media is the new sugar or the new smoking of this generation,” Shao told TechCrunch.

He quit social media in September of 2013 but tells TechCrunch he’s been on a journey to find ways to improve his life and others for the last 10 years.

His new fund, as laid out in a recent Medium post announcement, seeks to maximize social good and find technological solutions to the issues now facing us, not just to invest in something with good returns.

Shao plans to use his background as a prominent VC in a multi-billion-dollar firm to find those working on the type of technology to make us less anxious and more centered.

The Conscious Accelerator has already funded a meditation app called Insight Timer. It’s also going to launch a parenting app to help parents raise their children to be resilient in an often confusing world.

He’s also not opposed to funding projects like the one two UC Berkeley students put together to identify Russian and politically toxic Twitter bots — something Twitter has been criticized for not getting a handle on internally.

“The hope is we will attract entrepreneurs who are conscious,” Shao said.

Featured Image: Yann Cœuru/Flickr UNDER A CC BY 2.0 LICENSE

 

