
People

John Smith, 47
Joined: 28 January 2014
Interests: No data

Jonnathan Coleman, 31
Joined: 18 June 2014
About myself: You may say I'm a dreamer
Interests: Snowboarding, Cycling, Beer

Andrey II, 40
Joined: 08 January 2014
Interests: No data

David
Joined: 05 August 2014
Interests: No data

David Markham, 64
Joined: 13 November 2014
Interests: No data

Michelle Li, 40
Joined: 13 August 2014
Interests: No data

Max Almenas, 51
Joined: 10 August 2014
Interests: No data

29Jan, 30
Joined: 29 January 2014
Interests: No data

s82 s82, 25
Joined: 16 April 2014
Interests: No data

Wicca, 35
Joined: 18 June 2014
Interests: No data

Phebe Paul, 25
Joined: 08 September 2014
Interests: No data

Артем 007, 40
Joined: 29 January 2014
About myself: Yes indeed!
Interests: Norway and Iceland

Alexey Geno, 7
Joined: 25 June 2015
About myself: Hi
Interests: Interest1daasdfasf

Verg Matthews, 67
Joined: 25 June 2015
Interests: No data

CHEMICALS 4 WORLD DEVEN DHELARIYA, 32
Joined: 22 December 2014
Interests: No data



Main article: DARPA


DARPA wants to teach and test ‘common sense’ for AI

01:18 | 12 October

It’s a funny thing, AI. It can identify objects in a fraction of a second, imitate the human voice, and recommend new music, but most machine “intelligence” lacks the most basic understanding of everyday objects and actions — in other words, common sense. DARPA is teaming up with the Seattle-based Allen Institute for Artificial Intelligence to see about changing that.

The Machine Common Sense program aims to both define the problem and engender progress on it, though no one is expecting this to be “solved” in a year or two. But if AI is to escape the prison of the hyper-specific niches where it works well, it’s going to need to grow a brain that does more than execute a classification task at great speed.

“The absence of common sense prevents an intelligent system from understanding its world, communicating naturally with people, behaving reasonably in unforeseen situations, and learning from new experiences. This absence is perhaps the most significant barrier between the narrowly focused AI applications we have today and the more general AI applications we would like to create in the future,” explained DARPA’s Dave Gunning in a press release.

Not only is common sense lacking in AIs, but it’s remarkably difficult to define and test, given how broad the concept is. Common sense could be anything from understanding that solid objects can’t intersect to the idea that the kitchen is where people generally go when they’re thirsty. As obvious as those things are to any human more than a few months old, they’re actually quite sophisticated constructs involving multiple concepts and intuitive connections.

It’s not just a set of facts (like that you must peel an orange before you eat it, or that a drawer can hold small items) but identifying connections between them based on what you’ve observed elsewhere. That’s why DARPA’s proposal involves building “computational models that learn from experience and mimic the core domains of cognition as defined by developmental psychology. This includes the domains of objects (intuitive physics), places (spatial navigation), and agents (intentional actors).”

But how do you test these things? Fortunately, great minds have been at work on this problem for decades, and one research group has proposed an initial method for testing common sense that should work as a stepping stone to more sophisticated ones.

I talked with Oren Etzioni, head of the Allen Institute for AI, which has been working on common sense AI for quite a while now, among many other projects regarding the understanding and navigation of the real world.

“This has been a holy grail of AI for 35 years or more,” he said. “One of the problems is how to put this on an empirical footing. If you can’t measure it, how can you evaluate it? This is one of the very first times people have tried to make common sense measurable, and certainly the first time that DARPA has thrown their hat, and their leadership and funding, into the ring.”

The AI2 approach is simple but carefully calibrated. Machine learning models will be presented with written descriptions of situations and several short options for what happens next. Here’s one example:

On stage, a woman takes a seat at the piano. She
a) sits on a bench as her sister plays with the doll.
b) smiles with someone as the music plays.
c) is in the crowd, watching the dancers.
d) nervously sets her fingers on the keys.

The answer, as you and I would know in a heartbeat, is d. But the amount of context and knowledge that we put into finding that answer is enormous. And it’s not like the other options are impossible — in fact, they’re AI-generated to seem plausible to other agents but easily detectable by humans. This really is quite a difficult problem for a machine to solve, and current models are getting it right about 60 percent of the time (25 percent would be chance).
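
To make the test concrete, here is a minimal sketch of how a four-option benchmark like this can be scored. The word-overlap scorer below is a deliberately naive stand-in, not AI2's model; it shows why surface cues alone aren't enough.

    def score_option(context, option):
        # Placeholder plausibility score: favors options that reuse words
        # from the context. A real evaluator would use a trained model.
        ctx_words = set(context.lower().split())
        opt_words = option.lower().split()
        return sum(w in ctx_words for w in opt_words) / len(opt_words)

    def evaluate(items):
        # items: (context, [four options], index of the correct answer)
        correct = 0
        for context, options, answer in items:
            scores = [score_option(context, o) for o in options]
            if scores.index(max(scores)) == answer:
                correct += 1
        return correct / len(items)

    item = ("On stage, a woman takes a seat at the piano. She",
            ["sits on a bench as her sister plays with the doll.",
             "smiles with someone as the music plays.",
             "is in the crowd, watching the dancers.",
             "nervously sets her fingers on the keys."],
            3)  # option d
    # The naive scorer usually guesses wrong here; chance level is 25%.
    print(f"accuracy: {evaluate([item]):.0%}")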

There are 113,000 of these questions, but Etzioni told me this is just the first dataset of several.

“This particular dataset is not that hard,” he said. “I expect to see rapid progress. But we’re going to be rolling out at least four more by the end of the year that will be harder.”

After all, toddlers don’t learn common sense by taking the GRE. As with other AI challenges, you want gradual improvements that generalize to harder versions of similar problems — for example, going from recognizing a face in a photo, to recognizing multiple faces, then identifying the expression on those faces.

There will be a proposers day next week in Arlington for any researcher who wants a little face time with the people running the challenge, after which there will be a partner selection process early next year; the selected groups will then submit their models for evaluation by AI2’s systems in the spring.

The common sense effort is part of DARPA’s big $2 billion investment in AI on multiple fronts. But they’re not looking to duplicate or compete with the likes of Google, Amazon, and Baidu, which have invested heavily in the narrow AI applications we see on our phones and the like.

“They’re saying, what are the limitations of those systems? Where can we fund basic research that will be the basis of whole new industries?” Etzioni suggested. And of course it is DARPA and government investment that set the likes of self-driving cars and virtual assistants on their first steps. Why shouldn’t it be the same for common sense?


DARPA announces $2B investment in AI

02:01 | 8 September

At a symposium in Washington DC on Friday, DARPA announced plans to invest $2 billion in artificial intelligence research over the next five years (https://www.darpa.mil/news-events/2018-09-07).

In a program called “AI Next,” the agency has more than 20 programs in the works and will focus on “enhancing the security and resiliency of machine learning and AI technologies, reducing power, data, performance inefficiencies and [exploring] ‘explainability’” of these systems.

“Machines lack contextual reasoning capabilities, and their training must cover every eventuality, which is not only costly, but ultimately impossible,” said director Dr. Steven Walker. “We want to explore how machines can acquire human-like communication and reasoning capabilities, with the ability to recognize new situations and environments and adapt to them.”

Artificial intelligence is a broad term that can encompass everything from intuitive search features to true machine learning, and all definitions rely heavily on consuming data to inform their algorithms and “learn.” DARPA has a long history of research and development in this space, but has recently seen its efforts surpassed by foreign powers like China, which earlier this summer announced plans to become an AI leader by 2030.

In many cases these AI systems are still in their infancy, but the technology — especially machine learning — has the potential to completely transform not only how users interact with their own technology but how corporate and governmental institutions use this technology to interact with their employees and citizens.

One particular concern with machine learning is the potential bias that can be incorporated into these systems as a result of the data they consume during training. If the data contains holes or misinformation, the machines can come to incorrect conclusions — such as which individuals are “more likely” to commit crimes — that can have devastating consequences. And, even more frighteningly, when a machine comes to these conclusions organically, its “learning” is obscured in something called a black box.

In other words, even the researchers who design the algorithms can’t quite know how machines are reaching their conclusions.
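
A toy example makes the mechanism plain. The frequency-counting “model” below is an illustration, not any deployed system; it faithfully reproduces whatever skew its training data contains.

    from collections import Counter, defaultdict

    # Toy training data in which the label correlates with a demographic
    # attribute rather than with anything causal.
    train = [
        ("district_a", "flagged"), ("district_a", "flagged"),
        ("district_a", "not_flagged"), ("district_b", "not_flagged"),
        ("district_b", "not_flagged"), ("district_b", "flagged"),
    ]

    counts = defaultdict(Counter)
    for attr, label in train:            # "training" = counting labels
        counts[attr][label] += 1

    def predict(attr):
        # Predict the majority label seen for this attribute value.
        return counts[attr].most_common(1)[0][0]

    print(predict("district_a"))  # flagged: the skew, learned verbatim
    print(predict("district_b"))  # not_flagged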

That said, when handled with care and forethought, AI research can be a powerful source of innovation and advancement as well. As DARPA moves forward with its research, we will see how they handle these important technical and societal questions.


DARPA dedicates $75 million (to start) into reinventing chip tech

15:37 | 24 July

The Defense Department’s research arm, DARPA, is throwing an event around its “Electronics Resurgence Initiative,” an effort to leapfrog existing chip tech by funding powerful but unproven new ideas percolating in the industry. It plans to spend up to $1.5 billion on this over the years, of which about $75 million was earmarked today for a handful of new partners.

The ERI was announced last year in relatively broad terms, and since then it has solicited proposals from universities and research labs all over the country, arriving at a handful that it has elected to fund.

The list of partners and participants is quite long: think along the lines of MIT, Stanford, Princeton, Yale, the UCs, IBM, Intel, Qualcomm, National Labs, and so on. Big hitters. Each institution is generally associated with one of six sub-programs, each (naturally) equipped with its own acronym:

  • Software-defined Hardware (SDH) — Computing is often done on general-purpose processors, but specialized ones can get the job done faster. Problem is these “application specific integrated circuits,” or ASICs, are expensive and time-consuming to create. SDH is about making “hardware and software that can be reconfigured in real-time based on the data being processed.”
  • Domain-specific System on Chip (DSSoC) — This is related to SDH, but is about finding the right balance between custom chips, for instance for image recognition or message decryption, and general-purpose ones. DSSoC aims to create a “single programmable framework” that would let developers easily mix and match parts like ASICs, CPUs, and GPUs.
  • Intelligent Design of Electronic Assets (IDEA) — On a related note, creating such a chip’s actual physical wiring layout is an incredibly complex and specialized process. IDEA is looking to shorten the time it takes to design a chip from a year to a day, “to usher in an era of the 24-hour design cycle for DoD hardware systems.” Ideally no human would be necessary, though doubtless specialists would vet the resulting designs.
  • Posh Open Source Hardware (POSH) — This self-referential acronym refers to a program under which specialized SoCs like those the other programs are looking into would be pursued under open source licenses. Licensing can be a serious obstacle to creating the best system possible — one chip may use a proprietary system that can’t exist in concert with another chip’s proprietary system — so to enable reuse and easy distribution they’ll look into creating and testing a base set that has no such restrictions.
  • 3-Dimensional Monolithic System-on-a-chip (3DSoC) — The standard model of having processors and chips connected to a central memory and execution system can lead to serious bottlenecks. So 3DSoC aims to combine everything into stacks (hence the 3D part) and “integrate logic, memory and input-output (I/O) elements in ways that dramatically shorten — more than 50-fold — computation times while using less power.” The 50-fold number is, I’m guessing, largely aspirational.
  • Foundations Required for Novel Compute (FRANC) — That “standard model” of processor plus short term and long term memory is known as a Von Neumann architecture, after one of the founders of computing technology and theory, and is how nearly all computing is done today. But DARPA feels it’s time to move past this and create “novel compute topologies” with “new materials and integration schemes to process data in ways that eliminate or minimize data movement.” It’s rather sci-fi right now as you can tell but if we don’t try to escape Von Neumann, he will dominate us forever.
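
As a loose software-level analogy for the data-movement problem that 3DSoC and FRANC target, compare computing one sum by streaming ten million values through memory with computing it from a closed-form expression. This is a didactic sketch, not a hardware benchmark.

    import time

    N = 10_000_000
    data = list(range(N))

    t0 = time.perf_counter()
    total = 0
    for x in data:                     # every operand moves through memory
        total += x
    t1 = time.perf_counter()

    t2 = time.perf_counter()
    total_formula = N * (N - 1) // 2   # same answer, no data movement
    t3 = time.perf_counter()

    assert total == total_formula
    print(f"streaming loop: {t1 - t0:.3f}s, closed form: {t3 - t2:.6f}s")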

These are all extremely ambitious ideas, as you can see, but don’t think about it like DARPA contracting these researchers to create something useful right away. The Defense Department is a huge supporter of basic science; I can’t tell you how many papers I read where the Air Force, DARPA, or some other quasi-military entity has provided the funding. So think of it as trying to spur American innovation in important areas that also may happen to have military significance down the line.

A DARPA representative explained that $75 million is set aside for funding various projects under these headings, though the specifics are known only to the participants at this point. That’s the money just for FY18, and presumably more will be added according to the merits and requirements of the various projects. That all comes out of the greater $1.5 billion budget for the ERI overall.

The ERI summit is underway right now, with participants and DARPA reps sharing information, comparing notes, and setting expectations. The summit will no doubt repeat next year when a bit more work has been done.


DARPA design shifts round wheels to triangular tracks in a moving vehicle

05:20 | 26 June

As part of its Ground X-Vehicle Technologies program, DARPA is showcasing some new defense vehicle tech that’s as futuristic as it is practical. One of the innovations, a reconfigurable wheel-track, comes out of Carnegie Mellon University’s National Robotics Engineering Center in partnership with DARPA. The wheel-track is just one of a handful of designs meant to improve survivability of combat vehicles beyond just up-armoring them.

As you can see in the video, the reconfigurable wheel-track demonstrates a seamless transition between a round wheel shape and a triangular track in about two seconds, and the shift between the two modes can be executed while the vehicle is in motion, without cutting speed. Round wheels are optimal for hard terrain, while track-style treads allow an armored vehicle to move freely on softer ground.

According to Ground X-Vehicle Program Manager Major Amber Walker, the tech offers “instant improvements to tactical mobility and maneuverability on diverse terrains” — an advantage you can see on display in the GIF below.

While wheel technology doesn’t sound that exciting, the result is visually impressive and smooth enough to prompt a double-take.

The other designs featured in the video are noteworthy as well, with one offering a windowless navigation technology called Virtual Perspectives Augmenting Natural Experiences (V-PANE) that integrates video from an array of mounted LIDAR and video cameras to recreate a realtime model of a windowless vehicle’s surroundings. Another windowless cockpit design creates “virtual windows” for a driver, with 3D goggles for depth enhancement, head-tracking and wraparound window display screens displaying data outside the all-terrain vehicle in realtime.


DARPA is funding new tech that can identify manipulated videos and ‘deepfakes’

04:27 | 1 May

The Menlo Park-based nonprofit research group SRI International has been awarded three contracts by the Pentagon’s Defense Advanced Research Projects Agency (DARPA) to wage war on the newest front in fake news. Specifically, DARPA’s Media Forensics program is developing tools capable of identifying when videos and photos have been meaningfully altered from their original state in order to misrepresent their content.

The most infamous form of this kind of content is the category called “deepfakes” — usually pornographic video that superimposes a celebrity or public figure’s likeness into a compromising scene. Though the software that makes deepfakes possible is inexpensive and easy to use, existing video analysis tools aren’t yet up to the task of identifying what’s real and what’s been cooked up.

As articulated by its mission statement, that’s where the Media Forensics group comes in:

“DARPA’s MediFor program brings together world-class researchers to attempt to level the digital imagery playing field, which currently favors the manipulator, by developing technologies for the automated assessment of the integrity of an image or video and integrating these in an end-to-end media forensics platform.

“If successful, the MediFor platform will automatically detect manipulations, provide detailed information about how these manipulations were performed, and reason about the overall integrity of visual media to facilitate decisions regarding the use of any questionable image or video.”

While video is a particularly alarming application, manipulation poses a detection challenge even for still images, and DARPA is researching those challenges as well.

DARPA’s Media Forensics group, also known as MediFor, began soliciting applications in 2015, launched in 2016 and is funded through 2020. For the project, SRI International will work closely with researchers at the University of Amsterdam (see their paper “Spotting Audio-Visual Inconsistencies (SAVI) in Manipulated Video” for more details) and the Biometrics Security & Privacy group of the Idiap Research Institute in Switzerland. The research group is focusing on four techniques to identify the kind of audiovisual discrepancies present in a video that has been tampered with, including lip sync analysis, speaker inconsistency detection, scene inconsistency detection (room size and acoustics) and identifying frame drops or content insertions.
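
One of those ideas can be sketched in a few lines. The toy check below is inspired by the lip-sync technique, not the group’s actual algorithm, and every number is invented: in genuine speech video, audio energy and mouth opening tend to rise and fall together, so a weak correlation is a tamper signal.

    def pearson(xs, ys):
        # Plain Pearson correlation between two equal-length series.
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    # Per-frame audio energy and mouth-opening height (illustrative).
    audio_energy  = [0.1, 0.8, 0.9, 0.2, 0.7, 0.1]
    mouth_opening = [0.0, 0.7, 0.8, 0.1, 0.6, 0.0]

    if pearson(audio_energy, mouth_opening) < 0.5:  # arbitrary threshold
        print("possible lip-sync manipulation")
    else:
        print("audio and video appear consistent")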

Research awarded through the program is showing promise. In an initial round of testing last June, researchers were able to identify “speaker inconsistencies and scene inconsistencies,” two markers of video that’s been tampered with, with 75% accuracy in a set of hundreds of test videos. In May 2018, the group will be conducting a similar test on a larger scale, honing its technique in order to examine a much larger sample of test videos.

While the project does have potential defense applications, the research team believes that the aims of the program will become “front-and-center” in the near future as regulators, the media and the public alike reckon with an even more insidious strain of fake news.

“We expect techniques for tampering with and generating whole synthetic videos to improve dramatically in the near term,” a representative of SRI International told TechCrunch.

“These techniques will make it possible for both hobbyists and hackers to generate very realistic-looking videos of people doing and saying things they never did.”


DARPA’s Launch Challenge offers $10M prize for short-notice, rapid-turnaround rocketry

03:16 | 19 April

Getting to space is already tough, but getting there on short notice and then doing it again a couple weeks later? That’s a big ask. Nevertheless, DARPA is asking it as part of its Launch Challenge, announced today at the 34th Space Symposium in Colorado. Teams must take a payload to space with only days to prepare, then do it again soon after — if they want to win the $10M grand prize.

The idea is to nurture small space companies under what DARPA envisions as the future of launch conditions in both commercial and military situations. The ability to adapt to rapidly changing circumstances or fail gracefully if not will be critical in the launch ecosystem of the near future.

Here’s how it will go down. First, teams will have to pre-qualify to show they have the chops to execute this kind of task via a written explanation of their capabilities and the acquisition of a license to launch. Qualifying teams will be rewarded with $400,000 each.

Once a set of teams is established (applications close in December), DARPA will bide its time… and then spring the launches on them sometime in the second half of 2019.

How big is the payload? Does it need to be powered? Cooled? Does it need or provide data? All this will be a mystery until mere weeks before launch. For comparison, most launches are planned for years and only finalized months before the day. DARPA will, however, provide an “example orbit” earlier in 2019 so you have a general idea of what to expect.

Teams won’t even know where they’re launching from until just before. “Competitors should assume any current or future FAA-licensed spaceport may be used. Launch site services are planned to be austere — primarily a concrete pad with bolt-down fixtures and generator or shore power.” Basically, be ready to rough it.

Any team that successfully inserts its payload into the correct low-Earth orbit will receive $2 million. But they won’t be able to rest on their laurels: the next launch, with similarly mysterious conditions, will take place within two weeks of the first.

Teams that get their second payload into orbit correctly qualify for the grand prize — they’ll be ranked by “mass, time, and accuracy.” First place takes home $10M, second place $9M, and third place $8M. Not bad.

More information will be available come May 23, when DARPA will host a meeting and Q&A. In the meantime, you can read the contest rules summary here (PDF), and if you happen to be a rocket scientist or the head of a commercial space outfit, you can register for the challenge here.


DARPA wants new ideas for autonomous drone swarms

20:18 | 1 April

The Defense Department’s research wing is serious about putting drones into action, not just one by one but in coordinated swarms. The Offensive Swarm-Enabled Tactics program is kicking off its second “sprint,” a period of solicitation and rapid prototyping of systems based around a central theme. This spring sprint is all about “autonomy.”

The idea is to collect lots of ideas on how new technology, be it sensors, software, or better propeller blades, can enhance the ability of drones to coordinate and operate as a collective.

Specifically, swarms of 50 drones will need to “isolate an urban objective” within half an hour or so by working together with each other and with ground-based robots. That, at least, is the “operational backdrop” that should guide prospective entrants in deciding whether their tech is applicable.

So a swarm of drones that seeds a field faster than a tractor, while practical for farmers, isn’t really something the Pentagon is interested in here. On the other hand, if you can sell that idea as a swarm of drones dropping autonomous sensors on an urban battlefield, they might take a shine to it.

But you could also simply demonstrate how using a compact ground-based lidar system could improve swarm coordination at low cost and without using visible light. Or maybe you’ve designed a midair charging system that lets a swarm perk up flagging units without human intervention.

Those are pretty good ideas, actually — maybe I’ll run them by the program manager, Timothy Chung, when he’s on stage at our Robotics event in Berkeley this May. Chung also oversees the Subterranean Challenge and plenty more at DARPA. He looks like he’s having a good time in the video explaining the ground rules of this new sprint:

You don’t have to actually have 50 drones to take part — there are simulators and other ways of demonstrating value. More information on the program and how to submit your work for consideration can be found at the FBO page.
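
In that simulator-friendly spirit, here is a toy sketch of the kind of decentralized behavior the sprint is after: 50 simulated drones, each acting only on its own position, converge on a standoff ring around an objective. It is purely illustrative, not DARPA’s task definition.

    import math, random

    OBJECTIVE = (0.0, 0.0)   # the urban objective to isolate
    RING_RADIUS = 10.0       # standoff perimeter around it
    STEP = 0.5               # distance moved per tick

    drones = [(random.uniform(-100, 100), random.uniform(-100, 100))
              for _ in range(50)]

    def step(pos):
        # Each drone independently closes on the objective, then holds
        # position once it reaches the perimeter.
        dx, dy = OBJECTIVE[0] - pos[0], OBJECTIVE[1] - pos[1]
        dist = math.hypot(dx, dy)
        if dist <= RING_RADIUS:
            return pos
        return (pos[0] + STEP * dx / dist, pos[1] + STEP * dy / dist)

    for _ in range(400):
        drones = [step(p) for p in drones]

    # All 50 drones end up on or inside the ring around the objective.
    print(max(math.hypot(x, y) for x, y in drones))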


Check out self-driving cars, DARPA and more at TC Sessions: Robotics May 11 at UC Berkeley

19:58 | 26 March

As we’re gearing up for May’s big show, the announcements are starting to come fast and furious. In the past two weeks, we’ve revealed that Andy Rubin, Marc Raibert, Melonee Wise, Robert Full and more will be joining us May 11 at UC Berkeley’s Zellerbach Hall.

We’ll be unveiling the full schedule for the event in the coming days, but for now, we’ve got a couple of new names to share with you, showing the best and brightest from across a wide range of robotics categories, from self-driving cars, to DARPA, to human-robotic interaction.

Diving Deep with Driverless Cars

Chris Urmson has been deeply involved with autonomous vehicles for more than a decade. In 2007, his team at Carnegie Mellon won the DARPA Urban Challenge for self-driving cars. Two years later, he joined Google/Alphabet’s self-driving car team, eventually taking over as project lead.

These days, Urmson is the CEO of Aurora Innovation, an autonomous car company he cofounded with Tesla alum Sterling Anderson. The Bay Area-based company has been building systems for Volkswagen and Hyundai, and announced a partnership with NVIDIA earlier this year. Urmson will be joining us to discuss the promises — and pitfalls — of autonomous vehicles.

DARPA’s Latest Challenge

Launching a robotics company is challenging — and expensive. Thankfully, DARPA has played a key role in helping a number of important robotics startups get off the ground. Much of that funding has come courtesy of various DARPA Challenges, like the Subterranean Challenge launched late last year.

DARPA Tactical Technology Office program manager Timothy Chung will be on-hand at the event to lead a session exploring the department’s latest challenge, which seeks to “rapidly map, navigate, and search underground environments.”

Humans and Robots: Can’t We Just Get Along?

As robots and humans increasingly share spaces and overlap in capability, ensuring safe and efficient interactions grows more important. What new technologies and methods will enable them, and what challenges lie ahead for human-robot relations?

We’ll be joined by some top researchers in the field, including Ayanna Howard of Georgia Tech and Leila Takayama of UC Santa Cruz, to explore these challenges and more.

We Want to Hear From Your Robotics Company

It wouldn’t be a real TechCrunch event without a good, old-fashioned startup pitch. As we mentioned last time, we’re searching for four early-stage robotics startups to show off their goods for our panel of VCs and a crowd of students and roboticists. If your company has what it takes, you can apply here.

We’re also looking for companies to participate in demos and serve as subjects for some upcoming TechCrunch videos. If that sounds like a good fit, fill out this form here.

Early-bird tickets are on sale now. (Special 90 percent discount for students when you book here!)

If you’re interested in a sponsorship, contact us.


LiDAR autonomous sensor startup Ouster announces $27M Series A led by auto powerhouse Cox Enterprises

20:00 | 11 December

Angus Pacala has had a lifelong passion for autonomous cars going all the way back to high school. A little more than a decade ago, he followed the launch of the DARPA Grand Challenge, a Department of Defense competition that pitted research teams against each other over who could build the best autonomous car. Stanford won the challenge in 2005, which is “one of the reasons I went there,” Pacala said.

His freshman year, he met Mark Frichtl, who was similarly interested in autonomous cars. The two took classes and worked on problem sets together, and later worked side by side at Quanergy, which Pacala had co-founded. Now they are putting their collective talents together in a new venture called Ouster to bring affordable LiDAR sensors to the world.

Ouster today announced a $27 million Series A fundraise led by Cox Enterprises, whose Cox Automotive division owns and offers a variety of auto services including the well-known Kelley Blue Book and AutoTrader.com.

Few areas of research are as important to the viability of fully autonomous cars as sensors — the actual physical hardware which evaluates the space around a vehicle and provides the raw data for machine learning algorithms to control the car without a human driver.

While visible light cameras, radar, and infrared sensors have been used by various engineering teams to build a physical map around a vehicle, the key component for nearly all autonomous car platforms is LiDAR. As Devin described on TechCrunch in an overview earlier this year, the technology, which has been around for decades, has become one of the key linchpins to successfully building L4 and L5 fully-autonomous vehicles.

There is just one challenge: essentially only one company makes the technology for production scale — Velodyne, which is based in the Valley. The company announced just a few weeks ago that it is quadrupling production of its main LiDAR product due to demand from autonomous car manufacturers. However, the prices for many of its sensors remain out-of-reach for most consumer applications, with some of the company’s most advanced sensors costing tens of thousands of dollars.

For Pacala, that price barrier has been a major challenge and ultimately gets at the mission of Ouster. “Our long-term vision is to push LiDAR from being a research product to being in every consumer automobile,” he said.

Ouster hopes that its first product, a 64-channel LiDAR sensor called OS1, priced at $12,000, is that solution. The company says the product is dramatically lighter and smaller than competing sensors, and uses less power. It’s also shipping now.
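
For a sense of what a 64-channel spinning sensor produces, here is a back-of-the-envelope sketch that turns per-channel range readings into a 3D point cloud. The field of view, ranges, and resolution are invented for illustration; they are not OS1 specifications.

    import math

    CHANNELS = 64                        # one laser per vertical angle
    VERT_FOV = math.radians(30.0)        # assumed vertical field of view

    def to_xyz(r, azimuth, elevation):
        # Spherical-to-Cartesian conversion for a single return.
        x = r * math.cos(elevation) * math.cos(azimuth)
        y = r * math.cos(elevation) * math.sin(azimuth)
        z = r * math.sin(elevation)
        return (x, y, z)

    points = []
    for ch in range(CHANNELS):
        elevation = -VERT_FOV / 2 + ch * VERT_FOV / (CHANNELS - 1)
        for deg in range(360):           # one sample per degree of spin
            r = 20.0                     # placeholder range measurement
            points.append(to_xyz(r, math.radians(deg), elevation))

    print(len(points))  # 23,040 points per revolution in this toy scan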

Improving the performance of the sensor while also lowering its sticker price wasn’t a simple challenge. Pacala emphasized that technology wasn’t the entire solution. “While we can talk about the nitty-gritty of technology, the other side is not just the fundamental technology, but the design for manufacturability that really makes this lower cost while maintaining the performance” of the sensor.

That’s one of the reasons why he chose the venture partners that he did. “This isn’t just a typical list of Sand Hill investors. There is a time and place for those sort of investors, but we saw an opportunity to expand our reach by having investors who are much more attuned to the auto industry,” Pacala said. “VCs that are located in Detroit and who talk to OEMs day in and day out” had more to offer the company at this stage.

Ouster is setting an aggressive timeline to scale out its manufacturing. It intends to ramp up production heavily in the new year, targeting a thousand units a month in January and getting to ten thousand units a month by the end of June.

Certainly speed is of the essence. Venture capitalists around the world have been heavily funding automotive sensing technology over the past two years, including large rounds into Valley-based Luminar, Israel-based Oryx Vision, and China-based Hesai. But Pacala is sanguine about the company’s chances. “We have delivered first and then talked second, which is why you haven’t heard anything about us until now.” He hopes the heavy emphasis on getting manufacturing right early on will give him a lead in the race for your automobile.

In addition to Cox Enterprises, Fontinalis, Amity Ventures, Constellation Technology Ventures, Tao Capital Partners, and Carthona Capital also participated in the fundraise.

Featured Image: Ouster


This tiny sensor could sleep for years between detection events

04:48 | 12 September

It’s easy enough to put an always-on camera somewhere it can live off solar power or the grid, but deep in nature, underground, or in other unusual circumstances every drop of power is precious. Luckily, a new type of sensor developed for DARPA uses none at all until the thing it’s built to detect happens to show up. That means it can sit for years without so much as a battery top-up.

The idea is that you could put a few of these things in, say, the miles of tunnels underneath a decommissioned nuclear power plant or a mining complex, but not have to wire them all for electricity. But as soon as something appears, it’s seen and transmitted immediately. The power requirements would have to be almost nil, of course, which is why DARPA called the program Near Zero Power RF and Sensor Operation.

A difficult proposition, but engineers at Northeastern University were up to the task. They call their work a “plasmonically-enhanced micromechanical photoswitch,” which pretty much sums it up. I could end the article right here. But for those of you who slept in class the day we covered that topic, I guess I can explain.

The sensor is built to detect infrared light waves, invisible to our eyes but still abundant from heat sources like people, cars, fires, and so on. But as long as none are present, it is completely powered off.

But when a ray does appear, it strikes a surface covered in tiny patches that magnify its effect. Plasmons are a sort of special behavior of conducting materials, which in this case respond to the IR waves by heating up.

Here you can see the actual gap that gets closed by the heating of the element (lower left).

“The energy from the IR source heats the sensing elements which, in turn, causes physical movement of key sensor components,” wrote DARPA’s program manager, Troy Olsson, in a blog post. “These motions result in the mechanical closing of otherwise open circuit elements, thereby leading to signals that the target IR signature has been detected.”

Think of it like a paddle in a well. It can sit there for years without doing a thing, but as soon as someone drops a pebble into the well, it hits the paddle, which spins and turns a crank, which pulls a string, which raises a flag at the well-owner’s house. Except, as Olsson further explains, it’s a little more sophisticated.

“The technology features multiple sensing elements—each tuned to absorb a specific IR wavelength,” he wrote. “Together, these combine into complex logic circuits capable of analyzing IR spectrums, which opens the way for these sensors to not only detect IR energy in the environment but to specify if that energy derives from a fire, vehicle, person or some other IR source.”
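
A toy model shows how wavelength-tuned switches could combine into source-classifying logic. The bands and thresholds below are invented, and the real device does this mechanically rather than in software.

    # Wavelength bands (micrometers) each sensing element responds to.
    ELEMENTS = {"short": (2.5, 4.0), "mid": (4.0, 6.5), "long": (8.0, 11.0)}

    def switches_closed(spectrum):
        # spectrum: list of (wavelength_um, intensity) of incident IR.
        closed = set()
        for name, (lo, hi) in ELEMENTS.items():
            if any(lo <= wl <= hi and power > 0.5 for wl, power in spectrum):
                closed.add(name)
        return closed

    def classify(closed):
        # Crude stand-in for the "complex logic circuits" in the quote.
        if {"short", "mid"} <= closed:
            return "fire (hot source, short-wavelength-rich)"
        if "long" in closed and "short" not in closed:
            return "person (body-temperature source)"
        return "unknown source"

    print(classify(switches_closed([(3.2, 0.9), (5.1, 0.8)])))  # fire
    print(classify(switches_closed([(9.4, 0.7)])))              # person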

The “unlimited duration of operation for unattended sensors deployed to detect infrequent but time-critical events,” as the researchers describe it, could have plenty of applications beyond security, of course: imagine popping a few of these all over the forests to monitor the movements of herds, or in space to catch rare cosmic events.

The tech is described in a paper published today in Nature Nanotechnology.

Featured Image: DARPA / Northeastern University
