
SoftBank’s Vision Fund inches closer to $100B

19:47 | 8 December

Jason Rowley Contributor
Jason Rowley is a venture capital and technology reporter for Crunchbase News.

Much has been said about the SoftBank Vision Fund (SBVF), mostly in awe of the size of the investment vehicle.

It’s important to remember that the $100 billion number most often associated with the gargantuan fund is only a target. Today, however, the Vision Fund inched yet closer to that 12-figure goal as it continues to pour billions of dollars into technology companies around the world.

So far in 2018 the SoftBank Vision Fund has invested in more than 20 deals, accounting for over $21 billion in total investment. That sum didn’t all come from the Vision Fund of course — SoftBank’s Vision Fund typically invests alongside one or more syndicate partners who help fill out bigger rounds — but the amounts are nonetheless staggering. The chart below shows the Vision Fund’s investments since its inception in 2017.

In an annual Form D disclosure filed with the Securities and Exchange Commission this morning, SBVF disclosed that it has raised a total of approximately $98.58 billion from 14 investors since the date of first sale on May 20, 2017. The annual filing from last year said there was roughly $93.15 billion raised from 8 investors, meaning that the Vision Fund has raised $5.43 billion in the past year and added six new investors to its limited partner base.
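Those deltas can be verified with quick arithmetic; the script below is only an illustration, using the filing figures reported above:

```python
# Sanity-check of the fund-raising deltas reported in the Form D filings.
# All figures (in billions of USD) come from the filings cited above.
total_2018 = 98.58       # raised as of this year's filing
total_2017 = 93.15       # raised as of last year's filing
investors_2018 = 14
investors_2017 = 8
softbank_top_up = 5.0    # SoftBank's incentive-scheme contribution

raised_past_year = round(total_2018 - total_2017, 2)
new_investors = investors_2018 - investors_2017
unsourced = round(raised_past_year - softbank_top_up, 2)

print(raised_past_year)  # 5.43 (billion raised in the past year)
print(new_investors)     # 6 (new limited partners)
print(unsourced)         # 0.43 (the ~$430M not traceable to SoftBank)
```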

In a financial report from November, SoftBank Group Corp disclosed (p. 21, Note 1) it has invested an additional $5 billion in the fund, which is “intended for the installment of an incentive scheme for operations of SoftBank Vision Fund.” It brings SoftBank’s total contribution to $21.8 billion, in line with original targets.

The most recent Form D also cites six more limited partners. Crunchbase News presumes that the $430 million in new capital we cannot source back to SoftBank came from those new partners. SoftBank declined to comment on who they are.

Uncertainty looms over Vision Fund 2

One of the primary challenges an investor as big as the Vision Fund faces is sourcing capital. SoftBank doesn’t have a lot of choice about who it can take on as limited partners. To fill out a $100 billion fund (or something larger), government-backed investors are some of the only market participants with the financial wherewithal to anchor its limited partner base. And, sometimes, international politics and venture finance collide.

Saudi Arabia’s Public Investment Fund committed $45 billion to the SBVF, making it the single biggest backer of the fund. Saudi Crown Prince Mohammed bin Salman has been implicated in the extrajudicial torture, murder, dismemberment and disposal of Saudi dissident and Washington Post columnist Jamal Khashoggi in early October.

In November, TechCrunch reported that SoftBank would wait for the outcome of Khashoggi’s murder investigation before it decides on Vision Fund 2. New revelations this weekend close the window of reasonable doubt around bin Salman’s involvement in the murder.

This past weekend, The Wall Street Journal reported that the U.S. Central Intelligence Agency intercepted 11 messages sent between bin Salman and one of his closest aides, who allegedly oversaw the execution squad, in the hours before Khashoggi’s death. Amid mounting international and intelligence community consensus, though, the White House continues to defend Saudi Arabia.

Given these recent developments, it’s uncertain how SoftBank’s relationship with the Vision Fund’s principal backer will change going forward. Whether anything changes at all is itself an unknown at this point too.

SoftBank COO Marcelo Claure said there was “no certainty” of a follow-up fund back in mid-October.



Nigerian logistics startup Kobo360 raises $6M, expands in Africa

17:05 | 7 December

Jake Bright Contributor
Jake Bright is a writer and author in New York City. He is co-author of The Next Africa.

Nigerian trucking logistics startup Kobo360 has raised $6 million to upgrade its platform and expand operations to Ghana, Togo, and Cote D’Ivoire.

The company — with an Uber-like app that connects truckers and companies with freight needs — secured the equity financing in an IFC-led investment. The funding saw participation from others, including TLcom Capital and Y Combinator.

With the investment Kobo360 aims to become more than a trucking transit app.

“We started off as an app, but our goal is to build a global logistics operating system. We’re no longer an app, we’re a platform,” founder Obi Ozor told TechCrunch.

In addition to connecting truckers, producers and distributors, the company is building that platform to offer supply chain management tools for enterprise customers.

“Large enterprises are asking us for very specific features related to movement, tracking, and sales of their goods. We either integrate other services, like SAP, into Kobo or we build those solutions into our platform directly,” said Ozor.

Kobo360 will start by developing its API and opening it up to large enterprise customers.

“We want clients to be able to use our Kobo dashboard for everything: moving goods, tracking, sales, and accounting…and tackling their challenges,” said Ozor.

Kobo360 will also build more physical presence throughout Nigeria to service its business. “We’ll open 100 hubs before the end of 2019…to be able to help operations collect proof of delivery, to monitor trucks on the roads, and have closer access to truck owners for vehicle inspection and training,” said Ozor.

Kobo360 will add more warehousing capabilities, “to support our reverse logistics business”—one of the ways the company brings prices down by matching trucks with return freight after they drop their loads, rather than returning empty, according to Ozor.
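The return-freight idea described above can be illustrated with a toy matching sketch. Everything here (the function, the data, the city names) is invented for illustration; it is not Kobo360's actual system:

```python
# Toy sketch of reverse-logistics matching: pair trucks that have just
# delivered in a city with freight that needs to leave that same city,
# so they do not drive the return leg empty. All names and data are
# invented for illustration; this is not Kobo360's actual system.
from collections import defaultdict

def match_return_freight(idle_trucks, freight_jobs):
    """idle_trucks: list of (truck_id, current_city) tuples.
    freight_jobs: list of (job_id, origin_city) tuples.
    Returns (truck_id, job_id) pairings for same-city matches."""
    jobs_by_origin = defaultdict(list)
    for job_id, origin in freight_jobs:
        jobs_by_origin[origin].append(job_id)

    matches = []
    for truck_id, city in idle_trucks:
        if jobs_by_origin[city]:  # freight waiting where the truck already is
            matches.append((truck_id, jobs_by_origin[city].pop()))
    return matches

trucks = [("T1", "Lagos"), ("T2", "Kano"), ("T3", "Lagos")]
jobs = [("J1", "Lagos"), ("J2", "Abuja"), ("J3", "Lagos")]
print(match_return_freight(trucks, jobs))
```

A production system would also weigh deadhead distance, load compatibility and timing, but even this greedy same-city pass shows how aggregation lets a platform cut empty return trips.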

Kobo360 will also use its $6 million investment to expand programs and services for its drivers, something Ozor sees as a strategic priority.

“The day you neglect your drivers you are not going to have a company, only issues. If Uber were more driver focused it would be a trillion dollar company today,” he said.

The startup offers drivers training and group programs on insurance, discounted petrol, and vehicle financing (KoboWin). Drivers on the Kobo360 app earn on average approximately $5000 per month, according to Ozor.

Under KoboCare, Kobo360 has also created an HMO for drivers and an incentive-based program to pay for education. “We give school-fee support, a 5,000-naira bonus per trip for drivers toward educational expenses for their kids,” said Ozor.

Kobo360 will complete limited expansion into new markets Ghana, Togo, and Cote D’Ivoire in 2019. “The expansion will be with existing customers, one in the port operations business, one in FMCG, and another in agriculture,” said Ozor.

Ozor thinks the startup’s asset-free, digital platform and business model can outpace traditional long-haul 3PL providers in Nigeria by handling more volume at cheaper prices.

“Owning trucks is just too difficult to manage. The best scalable model is to aggregate trucks,” he told TechCrunch in a previous interview.

With the latest investment, IFC’s regional head for Africa Wale Ayeni and TLcom senior partner Omobola Johnson will join Kobo360’s board. “There’s a lot of inefficiencies in long-haul freight in Africa…and they’re building a platform that can help a lot of these issues,” said Ayeni of Kobo360’s appeal as an investment.

The company has served 900 businesses, aggregated a fleet of 8000 drivers and moved 155 million kilograms, per company stats. Top clients include Honeywell, Olam, Unilever, Dangote, and DHL.

MarketLine estimated the value of Nigeria’s transportation sector in 2016 at $6 billion, with 99.4 percent comprising road freight.

Logistics has become an active space in Africa’s tech sector, with startup entrepreneurs connecting digital to delivery models. In Nigeria, Jumia founder Tunde Kehinde departed and founded his own logistics venture. Startup Max.ng is wrapping an app around motorcycles as an e-delivery platform. Nairobi-based Lori Systems has moved into digital coordination of trucking in East Africa. And U.S.-based Zipline, which launched drone delivery of commercial medical supplies in partnership with the government of Rwanda and support from UPS, is in the “process of expanding to several other countries,” according to a spokesperson.

Kobo360 has plans for broader Africa expansion but would not name additional countries yet.

Ozor said the company is profitable, though the startup does not release financial results. Wale Ayeni also wouldn’t divulge revenue figures, but confirmed IFC did full “legal and financial due diligence on Kobo’s stats” as part of the investment.

Ozor named Lori Systems as Kobo360’s closest African startup competitor.

On the biggest challenge to revenue generation, it’s all about service delivery and execution, according to Ozor.

“We already have volume and demand in the market. The biggest threat to revenues is if Kobo360’s platform doesn’t succeed in solving our clients’ problems and bringing reliability to their needs,” he said.



7 things to think about voice

02:00 | 7 December

Tom Goodwin Contributor
Tom Goodwin is EVP, head of innovation at Zenith Media and the co-founder of the Interesting People in Interesting Times event series and podcast.

The next few years will see voice automation take over many aspects of our lives. Although voice won’t change everything, it will be part of a movement that heralds a new way to think about our relationship with devices, screens, our data and interactions.

We will become more task-specific and less program-oriented. We will think less about items and more about the collective experience of the device ecosystem they are part of. We will enjoy the experiences they make possible, not the specifications they celebrate.

In this new world, I hope we go from being the slaves we are today to being back in control.

Voice won’t kill anything

The standard way that technology arrives is to augment more than replace. TV didn’t kill the radio. VHS and then streamed movies didn’t kill the cinema. The microwave didn’t destroy the cooker.

Voice more than anything else is a way for people to get outputs from and give inputs into machines; it is a type of user interface. With UI design we’ve had the era of punch cards in the 1940s, keyboards from the 1960s, the computer mouse from the 1970s and the touchscreen from the 2000s.

All four of these mechanisms are around today and, with the exception of the punch card, we freely move between input types based on context. Touchscreens are terrible in cars and on gym equipment, but great for tactile applications. Computer mice are great for pointing and clicking. Each input does very different things brilliantly and badly, and we have learned what each is best used for.

Voice will not kill brands, it won’t hurt keyboard sales or touchscreen devices — it will become an additional way to do stuff; it is incremental, not cannibalistic.

We need to design around it

Nobody wanted the computer mouse before it was invented. In fact, many were perplexed by it because it made no sense in the previous era, where we used command lines, not visual icons, to navigate. When I worked with Nokia on touchscreens before the iPhone, the user experience sucked because the operating system wasn’t designed for touch. 3D Touch still remains pathetic because few software designers got excited by it and built for it.

What is exciting about voice is not using ways to add voice interaction to current systems, but considering new applications/interactions/use cases we’ve never seen.

At the moment, the burden is on us to fit around the limitations of voice, rather than have voice work around our needs.

A great new facade

Have you ever noticed that most companies’ desktop websites are their worst digital interface? Their mobile site is likely better, and the mobile app best of all. Most airline, hotel or bank apps don’t offer pared-down experiences (as was once the case), but their very fastest, slickest experience with the greatest functionality. What tends to happen is that new things get new capex, the best people and the most ability to bring change.

However, most digital interfaces are still designed around the silos, workflows and structures of the company that made them. Banks may offer eight different ways to send money to someone or something based around their departments; hotel chains may ask you to navigate by their brand of hotel, not by location.

The reality is that people are task-oriented, not process-oriented. They want an outcome and don’t care how. Do I give a crap if it’s Amazon Grocery or Amazon Fresh or Amazon Marketplace? Not one bit. Voice allows companies to build a new interface on top of the legacy crap they’ve inherited. I get to “send money to Jane today,” not press 10 buttons around their org chart.

It requires rethinking

The first time I showed my parents a mouse and told them to double-click, I thought they were having a fit: the cursor would move in jerks and often get lost. The same dismay and disdain I once had for them, I now feel every time I try to use voice. I have to reprogram my brain to think about information in a new way and to reconsider how my brain works. While this will happen, it will take time.

What gets interesting is what happens to the 8-year-olds who grow up thinking of voice first, what happens when developing nations embrace tablets with voice not desktop PCs to educate. When people grow up with something, their native understanding of what it means and what it makes possible changes. It’s going to be fascinating to see what becomes of this canvas.

Voice as a connective layer

We keep being dumb and thinking of voice as the way to interact with “a” machine and not as glue between all machines. Voice is an inherently crap way to get outputs: if a picture paints a thousand words, how long will it take to buy a T-shirt? The real value of voice is as a user interface across all devices. Advertising in magazines should offer voice commands to find out more. You should be able to yell at the Netflix carousel, or at TV ads to add products to your shopping list. Voice won’t be how we “do” entire things; it will be how we trigger or finish things.


We’ve only ever assumed we would talk to devices first. Do I really want to remember the command for turning on the lights in my home and utter six words to make it happen? Do I want to always be asking? Assuming devices are selective about when they speak first, it’s fun to imagine what happens when voice becomes proactive. Imagine the possibilities:

  • “Welcome home, would you like me to select evening lighting?”
  • “You’re running late for a meeting, should I order an Uber to take you there?”
  • “Your normal Citi Bike station has no bikes right now.”
  • “While it looks sunny now, it’s going to rain later.”
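
Prompts like these amount to context-triggered rules. A minimal sketch of that idea follows; the context keys and suggestion wording are hypothetical, not any vendor's actual API:

```python
# Hypothetical sketch: proactive voice prompts as simple context-triggered
# rules. The context keys and suggestion wording are invented for
# illustration; no real assistant API is being described.

def proactive_prompts(ctx):
    rules = [
        (lambda c: c.get("just_arrived_home"),
         "Welcome home, would you like me to select evening lighting?"),
        (lambda c: c.get("minutes_to_meeting", 999) < 15,
         "You're running late for a meeting, should I order a ride?"),
        (lambda c: c.get("bike_station_empty"),
         "Your normal bike station has no bikes right now."),
        (lambda c: c.get("rain_expected"),
         "While it looks sunny now, it's going to rain later."),
    ]
    # Fire every rule whose trigger matches the current context.
    return [msg for triggered, msg in rules if triggered(ctx)]

context = {"just_arrived_home": True, "rain_expected": True}
for line in proactive_prompts(context):
    print(line)
```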


While many think we don’t want to share personal information, there are ample signs that if we get something in return, trust the company and see transparency, we’re OK with it. Voice will not develop alone; it will progress alongside Google suggesting email replies, Amazon suggesting things to buy and Siri contextually suggesting apps to use. We will slowly become used to the idea of outsourcing some of our thinking and decisions to machines.

We’ve already outsourced a lot: we can’t remember phone numbers, addresses or birthdays, and we even rely on images to jog our recollection of experiences, so it’s natural we’ll outsource some decisions.

The medium-term future in my eyes is one where we allow more data to be used to automate the mundane. Many think voice means asking Alexa to order Duracell batteries, but it’s more likely to mean never thinking about batteries, laundry detergent or other low-consideration items again; the subscriptions simply get replenished.

There is an expression that a computer should never ask a question for which it can reasonably deduce the answer itself. When a technology is really here we don’t see, notice or think about it. The next few years will see voice automation take over many more aspects of our lives. The future of voice may be some long sentences and some smart commands, but mostly perhaps it’s simply grunts of yes.



Is ethical tech a farce?

17:30 | 5 December

Shannon Farley Contributor
Shannon Farley is co-founder and executive director at Fast Forward.

In the past year, we’ve seen tech platforms called out for stoking hate and not doing enough to instill ethics from the top down. There is a bigger problem at hand, and to borrow Silicon Valley parlance, it’s a feature, not a bug. When profits trump impact, ethics lose out.

Tech companies become profitable or lose their shirts on engagement rates. Cute cats get clicks. So do incendiary comments from world leaders. Engagement rates drive top-line growth. Social media platforms are designed to advance voices that drive engagement, regardless of the impact of that engagement. The problem is not just fueling hate; it’s a singular focus on a specific kind of value creation – profits. We have yet to see a tech sector leader optimize for profit and ethics with the same fervor. One always wins.

If profits beat ethics, is ethical tech possible? Simply put, yes. There is a different genre of tech startup that values impact over profits. They are tech nonprofits. Rather than building products that satisfy animalistic behavior, from screen addiction to fear mongering, tech nonprofits are building technology to fill gaps in basic human needs – education, human rights, health care. Or, as the early tech nonprofit Mozilla stated in its manifesto, technology “must enrich the lives of human beings.” Tech nonprofits are building tech products that serve customers where markets have failed.

Their primary goal is not a profit-proxy like engagement rates. Contrarily, they are designed to make the lives of human beings better – not just the ones whose clicks have market value. As the profit-focused tech community grapples with how to reverse its impact, and how to take a step back and consider building products with ethics in mind, tech nonprofits are the one clear example of ethical tech.

Take Khan Academy. Khan’s mission is to provide a free, world-class education to anyone, anywhere. Khan has many for-profit competitors, with new ones launching every day. Those tech companies are laser-focused on educating those with perceived market value, whereas Khan is designed with the rest of the population in mind, not just those who can afford to pay.

Crisis Text Line, born out of a recognized need for text-based crisis support for youth, is so impatient to solve the problems their users face that the organization open sources its data to inform journalists, school systems, and citizens, encouraging collaborative support in reducing and preventing these crises.

There are hundreds more tech nonprofits that value impact over profit and improve the lives of human beings. Platforms created to teach anyone to write well like Quill, or prevent teen pregnancy like RealTalk, or end veteran suicide like Objective Zero. These tech nonprofits will never measure success based on revenue or engagement. They exist to serve humanity. Tech nonprofits function as a social safety net. Engagement and profits couldn’t be further from the goal.

All of these companies started with the problem and leveraged the tech we use everyday to solve it. In this moment, the tech community is reckoning with what it’s built. It may take a generation or two to fix what we broke. It will certainly take an emerging breed of technologists, those who measure value in more than money, to model ethical tech.

So is ethical tech a farce? It doesn’t have to be. Tech nonprofits prove that.



Commercial insurtech is like an exclusive club — and Google and Amazon aren’t invited

01:00 | 5 December

Chris Downer Contributor
Chris Downer is a principal at XL Innovate, focusing on insurtech investments in North America, Europe and Asia.

Tech companies and VCs in the insurance space have probably read many of the news articles about Amazon and Google entering insurance (here, here and here). Given their nearly unlimited resources, this may be intimidating to some in the industry. Whether one views these moves as a threat, a welcomed development or something in-between, it’s important to note that both Google and Amazon have focused almost exclusively on personal lines, which is only one aspect of insurance.

There are many reasons for this — not least of which is Google and Amazon’s desire to add value to their customers who are, for the most part, consumers. Because the customer always comes first, most expect Amazon and Google to stay firmly focused on personal lines.

There is, however, another massive tranche of insurance that is ready for innovation: commercial lines. Commercial insurance is often frighteningly complex and requires too much inside information for tech companies to find it attractive. For the time being, when it comes to commercial insurance, Amazon and Google are firmly on the outside looking in. This competitive moat is one of many reasons that interest in commercial insurtech is heating up.

At the same time, there have been shockingly few commercial-focused startups so far, compared to personal lines companies. According to a recent report from Deloitte, just over $57 million was directed to commercial insurtechs in the first half of 2018 — or 6.6 percent of total insurtech-related funding in that period. In 2017, Deloitte reported a far higher proportion of 11.4 percent. Meanwhile, our analysis at XL Innovate, based on CB Insights data, shows that over $1 billion has been invested in companies that are addressing commercial insurance since 2015, which equates to roughly 10 percent of total insurtech investment.

So, regardless of how you slice it, commercial insurtech startups have been woefully underfinanced relative to insurtech companies addressing personal lines, distribution and other areas. As a result, commercial insurtech is heavily under-penetrated relative to the broader insurtech movement.

Why is this?

The story of the first insurtech wave is similar to many stories across the tech landscape: New ventures were driven by entrepreneurs from outside the industry looking to disrupt what they knew (auto insurance, renters/homeowners insurance or distribution). It’s natural, then, that initial efforts have focused on individual policies and more evident aspects of the insurance market.

Even existing commercial ventures have been concentrated in more obvious areas like distribution and auto. In fact, since 2015, those two categories account for more than half of commercial insurtech funding to date. Nearly all of the current major commercial ventures are in these spaces. Here are some of the highlights:

  • Distribution: Next Insurance, a full-stack commercial insurer, has raised $130 million. CoverHound and Policygenius, meanwhile, have raised just north of $50 million each. Distribution, in particular, has accounted for half the dollars invested across insurtech. Unsurprisingly, this is a trend that persists in the commercial space.
  • Auto: Nauto has raised more than $174 million, and players such as Nexar and ZenDrive are making noise in their own right on the financing side ($44 million for Nexar and $20 million for ZenDrive).

Only a few startups are looking at more complex areas, such as providing higher-quality property intelligence for commercial underwriters. Cape Analytics, for example, uses computer vision to extract information from aerial imagery automatically. This gives insurers access to recent, impactful data for any address across the nation, at time of rating and underwriting, and allows them to better evaluate risk throughout a policy lifecycle.

Why does this matter? Well, for example, according to Cape data, 8 percent of roofs in the U.S. are of poor or severe quality. Buildings with bad-quality roofs have a 50 percent higher loss potential than those with high-quality roofs — they have both a higher likelihood of submitting a claim and, if a claim is submitted, the loss payout is larger. For insurers, knowing the roof condition of a commercial building before providing a quote can help the insurer price policies more accurately and avoid heavy losses. This kind of data is indispensable to commercial insurers, but was unavailable until now.
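The arithmetic behind that pricing differential is simple to sketch. Only the 8 percent share and the 50 percent uplift come from the figures above; the baseline expected loss per policy is an invented, illustrative number:

```python
# Back-of-the-envelope expected-loss impact of roof quality, using the
# two figures quoted above: 8% of U.S. roofs are of poor/severe quality,
# and such buildings carry 50% higher loss potential. The baseline
# expected loss per policy is an invented, illustrative number.
baseline_loss = 1000.0   # expected loss per policy, good roof (invented)
poor_share = 0.08        # share of poor/severe roofs (from Cape data)
poor_uplift = 1.5        # 50% higher loss potential (from Cape data)

# Expected loss across a book priced without observing roof quality:
blended = (1 - poor_share) * baseline_loss + poor_share * baseline_loss * poor_uplift
extra_loading = blended / baseline_loss - 1

print(blended)        # about 1040: a ~4% loading hidden in the book
print(extra_loading)  # about 0.04
```

In other words, an insurer that cannot observe roof quality silently carries roughly a 4 percent extra loss loading across the whole book, which is exactly the margin this kind of data lets underwriters price out.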

Or take Windward, a marine risk analytics company, as another example. Windward is able to track every ship’s operational profile and can provide insights on the ship’s geography, weather, port visits, management and navigation. On a practical basis, this means Windward can track whether a ship is sailing at dangerous depths at night, as vessels traveling for longer periods at night at dangerous depths are 2.6x more likely to have contact accidents the following year. Windward can also track when a ship passes through traffic lanes. Vessels traveling for long periods in congested traffic lanes are 2x more likely to have collision accidents the following year. This is the kind of information marine insurers need to have on hand.

Still, there is way more headroom in the space.

Commercial has enormous potential given the magnitude of the market and the relative lack of awareness of its problems outside of insurance. Global commercial property and casualty insurance premiums were worth approximately $730 billion in 2017; by 2021, that figure is expected to rise to almost $900 billion. Meanwhile, only insiders truly understand facultative reinsurance, or how hull insurance is written and who writes it, or how to improve large commercial property insurance. If an entrepreneur comes from outside the industry, these are challenging markets and workflows to understand, let alone disrupt or improve.

On the other hand, insurance insiders who are intimately aware of the current status quo should be excited. The insurtech space now needs these insiders to become more involved, start new ventures, raise capital and help identify and solve the most meaningful problems in commercial insurance. Insiders, whether they be underwriters, actuaries, claims professionals or anyone else who has spent time within the industry, know the pain points, the pitfalls and the potential solutions.

Tackling the commercial space will be more challenging. Assets are larger and volume smaller, meaning learnings will be slower to come by and technologies like AI less effective in the short term. For example, if an insurer is underwriting 350 marine policies and there are only 15 claims per year, when is there enough data to drive statistically significant findings? Commercial lines still rely heavily on human judgment and manual processing. This is not a problem in personal lines because of the immense volume of data that can be harnessed and analyzed. So, although the opportunity is fantastic, it is important to keep in mind that the timeline to impact will likely be longer.
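The 350-policy example can be made concrete with a rough confidence interval for the observed claim frequency (a textbook normal-approximation sketch, not an actuarial model; only the 15-claims-on-350-policies figures come from the example above):

```python
# Rough 95% confidence interval for the claim frequency implied by the
# example above: 15 claims on 350 marine policies (normal approximation
# to the binomial; a textbook sketch, not an actuarial model).
import math

claims, policies = 15, 350
p_hat = claims / policies                        # observed frequency, ~4.3%
se = math.sqrt(p_hat * (1 - p_hat) / policies)   # standard error
lo, hi = p_hat - 1.96 * se, p_hat + 1.96 * se

print(f"observed frequency: {p_hat:.3f}")
print(f"95% CI: ({lo:.3f}, {hi:.3f})")
```

The interval runs from roughly 2.2 to 6.4 percent; the upper bound is nearly three times the lower, which is why small commercial books yield statistically significant findings so slowly.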

Those involved in the insurance technology wave have many reasons to be excited about commercial insurance, but patience will be key as new ventures look to tackle longstanding issues and as the space heats up. Luckily for entrepreneurs with a unique understanding of the industry, tech companies like Amazon and Google are not in a position to threaten the space for the foreseeable future.



Robot couriers scoop up early-stage cash

22:00 | 1 December

Much of the last couple of decades of innovation has centered around finding ways to get what we want without leaving the sofa.

So far, online ordering and on-demand delivery have allowed us to largely accomplish this goal. Just point, click and wait. But there’s one catch: Delivery people. We can never all lie around ordering pizzas if someone still has to deliver them.

Enter robots. In tech-futurist circles, it’s pretty commonplace to hear predictions about how some medley of autonomous vehicles and AI-enabled bots will take over doorstep deliveries in the coming years. They’ll bring us takeout, drop off our packages and displace lots of humans who currently make a living doing these things.

If this vision does become reality, there’s a strong chance it’ll largely be due to a handful of early-stage startups currently working to roboticize last-mile delivery. Below, we take a look at who they are, what they’re doing, who’s backing them and where they’re setting up shop.

The players

Crunchbase data unearthed at least eight companies in the robot delivery space with headquarters or operations in North America that have secured seed or early-stage funding in the past couple of years.

They range from heavily funded startups to lean seed-stage operations. Silicon Valley-based Nuro, an autonomous delivery startup founded by former engineers at Alphabet’s Waymo, is the most heavily funded, having raised $92 million to date. Others have raised a few million.

In the chart below, we look at key players, ranked by funding to date, along with their locations and key investors.

Who’s your backer?

While startups may be paving the way for robot delivery, they’re not doing so alone. One of the ways larger enterprises are keeping a toehold in the space is through backing and partnering with early-stage startups. They’re joining a long list of prominent seed and venture investors also eagerly eyeing the sector.

The list of larger corporate investors includes Germany’s Daimler, the lead investor in Starship Technologies. China’s Tencent, meanwhile, is backing San Francisco-based Marble, while Toyota AI Ventures has invested in Boxbot.

As for partnering, takeout food delivery services seem to be the most active users of robot couriers.

Starship, whose bot has been described as a slow-moving, medium-sized cooler on six wheels, is making particularly strong inroads in takeout. The San Francisco- and Estonia-based company, launched by Skype founders Janus Friis and Ahti Heinla, is teaming up with DoorDash and Postmates in parts of California and Washington, DC. It’s also working with the Domino’s pizza chain in Germany and the Netherlands.

Robby Technologies, another maker of cute, six-wheeled bots, has also been partnering with Postmates in parts of Los Angeles. And Marble, which is branding its boxy bots as “your friendly neighborhood robot,” teamed up last year for a trial with Yelp in San Francisco.

San Francisco Bay Area dominates

While their visions of world domination are necessarily global, the robot delivery talent pool remains rather local.

Six of the eight seed- and early-stage startups tracked by Crunchbase are based in the San Francisco Bay Area, and the remaining two have some operations in the region.

Why is this? Partly, there’s a concentration of talent in the area, with key engineering staff coming from larger local companies like Uber, Tesla and Waymo. Plus, of course, there’s a ready supply of investor capital, which bot startups presumably will need as they scale.

Silicon Valley and San Francisco, known for scarce and astronomically expensive housing, are also geographies in which employers struggle to find people to deliver stuff at prevailing wages to the hordes of tech workers toiling at projects like designing robots to replace them.

That said, the region isn’t entirely friendly territory for slow-moving sidewalk robots. In San Francisco, already home to absurdly steep streets and sidewalks crowded with humans and discarded scooters, city legislators voted to ban delivery robots from most places and severely restrict them in areas where permitted.

The rise of the pizza delivery robot manager

But while San Francisco may be wary of a delivery robot invasion, other geographies, including nearby Berkeley, Calif., where startup Kiwi Campus operates, have been more welcoming.

In the process, they’re creating an interesting new set of robot overseer jobs that could shed some light on the future of last-mile delivery employment.

For some startups in early trial mode, robot wrangling jobs involve shadowing bots and making sure they carry out their assigned duties without travails.

Remote robot management is also a thing and will likely see the sharpest growth. Starship, for instance, relies on operators in Estonia to track and manage bots as they make their deliveries in faraway countries.

For now, it’s too early to tell whether monitoring and controlling hordes of delivery bots will provide better pay and working conditions than old-fashioned human delivery jobs.

At least, however, much of it could theoretically be done while lying on the sofa.



Reddit co-founder Alexis Ohanian brings Armenian brandy to the US

21:45 | 1 December

Brett Moskowitz Contributor
Brett Moskowitz writes about cocktails and spirits for a variety of publications, including Food & Wine, Esquire, Saveur, Tasting Table, Liquor.com, Neatpour, and others. He lives in New York City.

When Alexis Ohanian, co-founder of Reddit, approached the members-only spirits subscription club Flaviar about bringing an Armenian brandy to market, he saw it as a unique opportunity to honor his paternal heritage.

“My father’s side all fled during the genocide,” Ohanian told me in an interview. “He grew up pretty Americanized, but the food and drink were the big parts of the culture that were passed down.”

Ohanian spent some time in Armenia as an adult and became acquainted with the tradition of taking shots of the local brandy, called “konyak,” out of sliced apricots. And he wanted to expose Americans to the highest-quality aged Armenian brandy.

So when he received the investor update about Son of a Peat, a whiskey that Flaviar made for its members, he saw that the company was capable of bringing its own spirits to market. It was a new direction for Flaviar, and he thought it opened the door for him to have his own product. Ohanian decided to pitch his idea to the team.

“For most Americans, this is their first exposure. If we can make it a thing in America, I’d love to pull that off,” Ohanian said. “It’s not often that I do this with one of the companies I invest with; I chatted to Jugo Petkovic and Grisa [Soba], the founders of Flaviar, about creating my own spirit. I proposed the idea of Armenian brandy. They were like, ‘this is weird, but we’ll look into it.’”

He says that there are not many Armenian exports that people are aware of and has a hunch that more than just Armenians will like the brandy. “I hope I can be a good ambassador for it.”

Flaviar agreed to make the brandy for Ohanian who decided to call it Shakmat — “chess” in Armenian. “Chess is a huge part of the Armenian identity. So is Armenian konyak.”

The launch of Shakmat is an expansion of the relationship between Ohanian and Flaviar, which recently began developing its own spirit brands after initial success building a consumer base with subscription deliveries of spirit-tasting boxes.

Since beginning operations in 2012, Flaviar has grown to include thousands of annual subscribers in the U.S. and Europe. In addition, the $210 yearly fee gets members access to live tastings and discounts on exclusive bottlings and private labels.

After initial funding from a local angel investor, and later one of the first investments from Speedinvest’s first fund, says Petkovic, Flaviar went through Y Combinator in the summer of 2014.

It was there where Petkovic and his co-founder Grisa Soba met Ohanian. “He, among several other YC partners, ended up investing personally, as well. We raised a few undisclosed rounds of funding since then.”

Ohanian says that what drew him to Flaviar was its unique approach to connecting with consumers in the spirits marketplace. “This was before ‘direct to consumer’ was a thing,” Ohanian says. “The state laws around liquor sales were starting to change, because of e-commerce. And what [Flaviar] realized was that you could build a relationship with customers around liquor. We can focus on curating really great juice. And we have enough credibility now that we can make our own.”

The privately held company purchased competitor Caskers.com earlier this year and has an ambitious vision based on the idea that spirits remain inaccessible to most consumers who are interested in educating themselves about what’s out there.

“We always envisioned Flaviar as a lifestyle club to which members would belong for years,” says Petkovic. “We believe new products are best discovered through curated selection, education and engagement with a community of people who share your passion.”

Shakmat launched in the U.S. on November 12. Ohanian says he was thrilled at the opportunity to provide a platform for a deep cultural tradition and to donate 10 percent of revenues to the community by supporting Armenian reforestation efforts.  (Armenian forests were severely depleted during the Soviet occupation because of the need to use wood as an energy source.)

The first run of Shakmat includes 2,400 bottles of the 80 proof (40% ABV) 23-year-old XO brandy, but don’t be surprised if another run hits the market soon. Bottles can be purchased by Flaviar members for $95 and non-members for $110.



The battle over the driving experience is heating up and will be won in software  

17:30 | 1 December

Lou Shipley Contributor
More posts by this contributor

Sirius XM’s recent all-stock $3.5-billion purchase of the music-streaming service Pandora raised a lot of eyebrows. A big question was why Sirius paid so much. Is Pandora’s music library and customer base really worth that amount? The answer is that this was a strategic move by Sirius in a battle that is far bigger than radio. The real battle, which will become much more visible in the coming years, is over the driving experience.

People spend a lot of time commuting in their cars. That time is fixed and won’t likely change. However, what is changing is the way we drive. We’re already seeing many new cars with driver assist features, and automakers (and tech companies) are working hard to bring fully autonomous cars to the market as quickly as possible. New cars today already contain an average of 100 million lines of code that can be updated to increase driver assist options, and some automakers like Tesla already offer an “autonomous” mode on highways.

According to the Brookings Institution, one-quarter of all cars will be autonomous by 2040 and IHS predicts all cars will be autonomous after 2050. Those are conservative estimates, as we are likely to see major changes in the next 10 years.

These changes will impact the driving experience. As cars become more autonomous, we can do more than simply listen to music or podcasts. We may be able to watch videos, surf the web, and more. Car real estate is already valuable, and its value is going to skyrocket as we change the way people consume media while driving.

The Pandora acquisition was a strategic move by Sirius to gain the necessary assets so that it won’t fall behind in this space — and to get into the fast-growing music streaming business, where users consume music at home, work and at play.  While Pandora’s music library is arguably second tier, it’s also good enough that it can provide pretty much every artist most people want. This is often how high-priced mergers happen – one party is concerned about falling behind and pays a premium to purchase the other company’s assets. It’s also a bet by Sirius about the driving experience of the future.

As the battle over the driving experience heats up, we will initially see companies like Google, Amazon and Apple start dipping their toes in the market. They might do that through investments in startups, rolling out their own services, or purchasing competitors. Some of those large tech companies already have projects around autonomous cars. Uber may even be interested in this market.

For now, Sirius probably doesn’t need to worry about competition from startups. They won’t be able to grow big enough fast enough to get a sizable share of the market. A more likely scenario is that startups will work on software that offers a unique functionality, making it an attractive acquisition target by a larger company.

This is going to be an interesting battle to watch in the coming years, as cars essentially become software with four wheels attached. Companies like Sirius know this is an important space and that the battle over the driving experience will be won in software. The acquisition of Pandora is only the beginning.



The attributes that define the increasingly critical Data-as-a-Service industry

01:00 | 30 November

Alex Rosen Contributor
Alex Rosen is managing director at Ridge Ventures.

It’s now common in tech to describe data as “the new oil or electricity” — a fuel that will power innovation and company growth for the foreseeable future. However, data is far from a novel industry. In fact, it’s a decades-old market, and many successful data companies, such as Bloomberg, LiveRamp (now Acxiom), Oracle Data Cloud and Nielsen have been built in the past and serve as industry leaders… for the time being.

Still, a few characteristics separate today’s world of data businesses from those in the past. The market for data is increasing in size at a rapid rate, mostly due to new methods of measurement (like mobile phones, IoT sensors and satellite imagery) that generate new forms of information, as well as new, prevalent use cases like AI, which rely on huge quantities of high-quality data to work (emphasis on the high-quality).

These changes have led to unprecedented demand for data outside of traditionally data-hungry markets, like finance, marketing and real estate. They have also led to a new iteration of data company classified as Data-as-a-Service (DaaS) — companies like Datanyze (acquired by ZoomInfo), Safegraph, Clearbit, PredictHQ and DataFox. DaaS stresses higher-velocity, higher-quality, near real-time data that can support more rigorous needs, such as training machine learning algorithms. Non-financial corporations are more than happy to ingest external data that helps them streamline their operations, supply chain and marketing.

In this evolving world of Data as a Service, there are a few attributes that lead to a successful company:

DaaS must serve a big enough market. This seems like an obvious point, but too many entrepreneurs assume they can easily sell large volumes of high-quality data. Even though data is in higher demand than ever, the ability to use it and integrate it into general customer workflows has not been democratized. Music downloads and charts, for example, are valuable data, but the customer segment is not large enough at this point and a few players dominate the market. Social media or influencer ranking data, like Klout’s, is similar. There are many categories of real-time data that do not have the size or impact necessary to sustain a large-scale DaaS business.

DaaS is not about disruption, it’s about empowerment. Many startups want to “disrupt” a space, but DaaS companies need to focus on integrating into existing workflows rather than demanding customers change how they do business. This requires deep customer knowledge, easy integration and data that immediately provides value to the business. Potential customers have seen the buzz around Big Data, Hadoop and business intelligence, but the only thing they talk about is dashboard fatigue. It’s important that DaaS companies focus on seamless integration and solving a well-defined customer problem.

DaaS should have increasing incremental margin. Data businesses often have significant COGS, particularly at small scale. However, as a data business gets larger, the gross margins can improve dramatically. So it’s really important to understand whether the cost of acquiring or generating data changes as you bring on new customers. I call this incremental margin; the change in the difference between cost of producing data and how much that data can be sold for. If your gross margin is significantly higher for your fiftieth customer than it was for your first, then you are on your way to building a venture-backable business (or, if the margin is high enough, you may not even need VC backing at all). This increasing margin is a key pillar in building out a large, sustainable DaaS company.
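To make the incremental-margin idea concrete, here is a toy sketch. All numbers are invented for illustration: a fixed cost to acquire a dataset, a small per-customer delivery cost, and a flat subscription price. Blended gross margin is deeply negative with one customer but healthy by the fiftieth, while the margin on each additional customer stays high because the data cost is already sunk.

```python
# Toy DaaS economics: fixed data-acquisition cost, small per-customer
# delivery cost, flat subscription price. All figures are hypothetical.
def gross_margin(n_customers, price=10_000, fixed_data_cost=200_000, delivery_cost=500):
    """Blended gross margin across the first n customers."""
    revenue = price * n_customers
    cogs = fixed_data_cost + delivery_cost * n_customers
    return (revenue - cogs) / revenue

def incremental_margin(price=10_000, delivery_cost=500):
    """Margin on one additional customer: the data cost is already sunk."""
    return (price - delivery_cost) / price

print(f"blended margin, 1 customer:   {gross_margin(1):.0%}")   # deeply negative
print(f"blended margin, 50 customers: {gross_margin(50):.0%}")  # 55%
print(f"incremental margin:           {incremental_margin():.0%}")  # 95%
```

If your business looks like the last two lines rather than the first, the cost of serving each new customer is falling away from the price you can charge, which is the pattern the author argues makes a DaaS company venture-backable.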


DaaS must be machine readable. Today, data accuracy is increasingly powering company innovation, and quality becomes more important as data is used for AI training purposes. If a company is using data for something like a marketing campaign, poor quality is not critical. Moreover, people have accepted a rock-bottom level to date — often 80 percent of marketing data may be erroneous. However, when data is being used to power AI applications and machine learning algorithms, low data quality could be disastrous. In other words, DaaS must be machine readable. Some data may need to be cleaned up; Trifacta is an example of a company that provides the tools to ensure higher-quality data. Other companies, such as Crowdflower (now Figure Eight), Mighty AI and Samasource label data and clean it up for algorithmic use.
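In practice, “machine readable” means a pipeline can validate records before they ever reach a model. A minimal sketch of that gate, with invented field names and records, might look like this:

```python
# Hypothetical company records; field names and values are invented.
records = [
    {"company": "Acme Corp", "employees": 250, "domain": "acme.example"},
    {"company": "",          "employees": 120, "domain": "no-name.example"},  # missing name
    {"company": "Globex",    "employees": -5,  "domain": "globex.example"},   # impossible value
    {"company": "Initech",   "employees": 40,  "domain": ""},                 # missing domain
]

def is_machine_readable(rec):
    """Reject any record an ML training pipeline could not safely consume."""
    return bool(rec["company"]) and rec["employees"] > 0 and bool(rec["domain"])

clean = [r for r in records if is_machine_readable(r)]
print(f"{len(clean)} of {len(records)} records pass validation")  # 1 of 4
```

A marketing campaign might tolerate the three rejected rows; a model trained on them would silently learn from garbage, which is the asymmetry the paragraph above describes.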

DaaS must have continuous movement. In other words, there should be continuous value in data getting refreshed. A successful DaaS company does not provide data to serve a one-time use case; rather, the data should have a combination of velocity (change over time; days or hours) and inherent value in knowing the changes that are occurring. The higher the data velocity, the more value potential exists within that company’s data. Real estate or stock market data are good examples of value increasing with greater velocity.

DaaS must tell a story. Numbers are no longer enough. DaaS companies must provide the tools and analytics or AI to unlock data, identify trends and then provide context around those trends. AI is particularly useful in finding correlations across data sets that humans would never know to look for. Safegraph, which produces granular location data, provides us with some great examples of this. Location data is far more than the sum of its parts when it contains enough velocity and accuracy. For example, when paired with ZIP Code-based income data, location data can tell us quite a bit about food deserts and their disproportionate effect on poorer households that have to travel three times farther to get to a grocery store. Or, location data can tell us about the vast differences in travel patterns across different cities — information that is critical in the development of autonomous vehicles, where different vehicle types and considerations will be necessary for different use cases.

The above attributes are ones that differentiate DaaS businesses from more traditional data companies. Startups looking to build sustainable, high-growth companies should heed these critical elements. As the need for AI-enhanced products grows, DaaS will only grow with it — but it is data quality, velocity and margins that will decide whether or not a startup is successful in the long run. As demand for DaaS increases, I expect we’ll also see an entire industry of data marketplaces and data cleaning products and services built around it.



The crusade against open source abuse

20:30 | 29 November

Salil Deshpande Contributor
Salil Deshpande serves as the managing director of Bain Capital Ventures. He focuses on infrastructure software and open source.

There’s a dark cloud on the horizon. The behavior of cloud infrastructure providers, such as Amazon, threatens the viability of open source. I first wrote about this problem in a prior piece on TechCrunch. In 2018, thankfully, several leaders have mobilized (amid controversy) to propose multiple solutions to the problem. Here’s what’s happened in the last month.

The Problem

Go to Amazon Web Services (AWS) and hover over the Products menu at the top. You will see numerous open-source projects that Amazon did not create but runs as a service. These provide Amazon with billions of dollars of revenue per year. To be clear, this is not illegal. But it is not conducive to sustainable open-source communities, and especially commercial open-source innovation.

Two Solutions

In early 2018, I gathered together the creators, CEOs or general counsels of two dozen at-scale open-source companies, along with respected open source lawyer Heather Meeker, to talk about what to do.

We wished to define a license that prevents cloud infrastructure providers from running certain software as a commercial service, while at the same time making that software effectively open source for everyone else, i.e., everyone not running that software as a commercial service.

With our first proposal, Commons Clause, we took the most straightforward approach: we constructed one clause, which can be added to any liberal open source license, preventing the licensee from “Selling” the software — where “Selling” includes running it as a commercial service. (Selling other software made with Commons Clause software is allowed, of course.) Applying Commons Clause transitions a project from open source to source-available.

We also love the proposal being spearheaded by another participant, MongoDB, called the Server Side Public License (SSPL). Rather than prohibit the software from being run as a service, SSPL requires that you open-source all programs that you use to make the software available as a service, including, without limitation, management software, user interfaces, application program interfaces, automation software, monitoring software, backup software, storage software and hosting software, all such that a user could run an instance of the service. This is known as a “copyleft.”

These two licenses are two different solutions to exactly the same problem. Heather Meeker wrote both solutions, supported by feedback organized by FOSSA.

The initial uproar and accusations that these efforts were trying to “trick” the community fortunately gave way to understanding and acknowledgement from the open source community that there is a real problem to be solved here, that it is time for the open source community to get real, and that it is time for the net giants to pay fairly for the open source on which they depend.

In October, one of the board members of the Apache Software Foundation (ASF) reached out to me and suggested working together to create a modern open source license that solves the industry’s needs.

Kudos to MongoDB

Further kudos are owed to MongoDB for definitively stating that they will be using SSPL, submitting SSPL in parallel to an organization called Open Source Initiative (OSI) for endorsement as an open source license, but not waiting for OSI’s endorsement to start releasing software under the SSPL.

OSI, which has somehow anointed itself as the body that will “decide” whether a license is open source, has a habit of myopically debating what’s open source and what’s not. With the submission of SSPL to OSI, MongoDB has put the ball in OSI’s court to either step up and help solve an industry problem, or put their heads back in the sand.

In fact, MongoDB has done OSI a huge favor. MongoDB has gone and solved the problem and handed a perfectly serviceable open source license to OSI on a silver platter.

Open Source Sausage

The public archives of OSI’s debate over SSPL are at times informative and at times amusing, bordering on comical. After MongoDB’s original submission, there were rah-rah rally cries amongst the members to find reasons to deem SSPL not an open source license, followed by some +1’s. Member John Cowan reminded the group that just because OSI does not endorse a license as open source, does not mean that it is not open source:

As far as I know (which is pretty far), the OSI doesn’t do that. They have never publicly said “License X is not open source.” People on various mailing lists have done so, but not the OSI as such. And they certainly don’t say “Any license not on our OSI Certified ™ list is not open source”, because that would be false. It’s easy to write a license that is obviously open source that the OSI would never certify for any of a variety of reasons.

Eliot Horowitz (CTO and co-founder of MongoDB) responded cogently to questions, comments and objections, concluding with:

In short, we believe that in today’s world, linking has been superseded by the provision of programs as services and the connection of programs over networks as the main form of program combination. It is unclear whether existing copyleft licenses clearly apply to this form of program combination, and we intend the SSPL to be an option for developers to address this uncertainty.

Much discussion ensued about the purpose, role and relevance of OSI. Various sundry legal issues were raised or addressed by Van Lindberg, McCoy Smith, and Bruce Perens.

Heather Meeker (the lawyer who drafted both Commons Clause and SSPL) stepped in and completely addressed the legal issues that had been raised thus far. Various other clarifications were made by Eliot Horowitz, and he also conveyed willingness to change the wording of the license if it would help.

Discussion amongst the members continued about the role, relevance and purpose of OSI, with one member astutely noting that there were a lot of “free software” wonks in the group, attempting to bastardize open source to advocate their own agenda:

If, instead, OSI has decided that they are now a Free Software organization, and that Free Software is what “we” do, and that “our” focus is on “Free software” then, then let’s change the name to the Free Software Initiative and open the gates for some other entity, who is all about Open Source, to take on that job, and do it proudly. :-)

There was debate over whether SSPL discriminates against types of users, which would disqualify it from being open source. Eliot Horowitz provided a convincing explanation that it did not, which seemed to quiet the crowd.

Heather Meeker dropped some more legal knowledge on the group, which seemed to sufficiently address outstanding issues. Bruce Perens, the author of item 6 of the so-called open source definition, acknowledged that SSPL does not violate item 6 or item 9 of the definition, and subsequently suggested revising item 9 such that SSPL would violate it:

We’re not falling on our swords because of this. And we can fix OSD #9 with a two word addition “or performed” as soon as the board can meet. But it’s annoying.

Kyle Mitchell, himself an accomplished open source lawyer, opposed such a tactic. Larry Rosen pointed out that some members’ assertion (that “it is fundamental to open source that everyone can use a program for any purpose”) is untrue. Still more entertaining discussion ensued about the purpose of OSI and the meaning of open source.

Carlos Piana succinctly stated why SSPL was indeed open source. Kyle Mitchell added that if licenses were to be judged in the manner that the group was judging SSPL, then GPL v2 was not open source either.


Meanwhile Dor Lior, the founder of database company ScyllaDB, compared SSPL and AGPL side by side and argued that “MongoDB would have been better off with Commons Clause or just swallowed a hard pill and stayed with AGPL.” Player.FM built on Commons Clause-licensed RediSearch after in-memory database company Redis Labs placed RediSearch and four other specific add-on modules (but not Redis itself) under Commons Clause, and graph database company Neo4j placed its entire codebase under Commons Clause and raised an $80M Series E.

Then Michael DeHaan, creator of Red Hat Ansible, chose Commons Clause for his new project. When asked why he did not choose any of the existing licenses that OSI has “endorsed” to be open source, he said:

This groundswell in 2018 should be ample indication that there is an industry problem that needs to be fixed.

Eliot Horowitz summarized and addressed all the issues, dropped the mic, and left for a while. When it seemed like SSPL was indeed following all the rules of open source licenses, and was garnering support of the members, Brad Kuhn put forward a clumsy argument for why OSI should change the rules as necessary to prevent SSPL from being deemed open source, concluding:

It’s likely the entire “license evaluation process” that we use is inherently flawed.

Mitchell clinched the argument that SSPL is open source with definitive points. Horowitz thanked the members for their comments and offered to address any concerns in a revision, and returned a few days later with a revised SSPL.

OSI has 60 days since MongoDB’s new submission to make a choice:

  1. Wake up and realize that SSPL, perhaps with some edits, is indeed an open source license, OR
  2. Effectively signal to the world that OSI does not wish to help solve the industry’s problems, and that they’d rather be policy wonks and have theoretical debates.

“Wonk” here is meant in the best possible way.

Importantly, MongoDB is proceeding to use the SSPL regardless. If MongoDB were going to wait until OSI’s decision, or if OSI were more relevant, we might wait with bated breath to hear whether OSI would endorse SSPL as an open source license.

As it stands, OSI’s decision is more important to OSI itself, than to the industry. It signals whether OSI wants to remain relevant and help solve industry problems or whether it has become too myopic to be useful. Fearful of the latter, we looked to other groups for leadership and engaged with the Apache Software Foundation (ASF) when they reached out in the hopes of creating a modern open source license that solves the industry’s needs.

OSI should realize that while it would be nice if they deemed SSPL to be open source, it is not critical. Again in the words of John Cowan, just because OSI has not endorsed a license as open source, does not mean it’s not open source. While we greatly respect almost all members of industry associations and the work they do in their fields, it is becoming difficult to respect the purpose and process of any group that anoints itself as the body that will “decide” whether a license is open source — it is arbitrary and obsolete.


In my zest to urge the industry to solve this problem, in an earlier piece, I had said that “if one takes open source software that someone else has built and offers it verbatim as a commercial service for one’s own profit” (as cloud infrastructure providers do) that’s “not in the spirit” of open source. That’s an overstatement and thus, frankly, incorrect. Open source policy wonks pointed this out. I obviously don’t mind rattling their cages, but I should have stayed away from making statements about “what’s in the spirit” so as to not detract from my main argument.


The behavior of cloud infrastructure providers poses an existential threat to open source. Cloud infrastructure providers are not evil. Current open source licenses allow them to take open source software verbatim and offer it as a commercial service without giving back to the open source projects or their commercial shepherds. The problem is that developers do not have open source licensing alternatives that prevent cloud infrastructure providers from doing so. Open source standards groups should help, rather than get in the way. We must ensure that authors of open source software can not only survive, but thrive. And if that means taking a stronger stance against cloud infrastructure providers, then authors should have licenses available to allow for that. The open source community should make this an urgent priority.


I have not invested directly or indirectly in MongoDB. I have invested directly or indirectly in the companies behind the open source projects Spring, Mule, DynaTrace, Ruby Rails, Groovy Grails, Maven, Gradle, Chef, Redis, SysDig, Prometheus, Hazelcast, Akka, Scala, Cassandra, Spinnaker, FOSSA, and… in Amazon.

