Blog of the website «TechCrunch»


Main article: Artificial intelligence


Lawyers hate timekeeping. Ping raises $13M to fix it with AI

16:33 | 12 November

Counting billable time in six-minute increments is the most annoying part of being a lawyer. It’s a distracting waste. It leads law firms to conservatively under-bill. And it leaves lawyers stuck manually filling out timesheets after a long day when they want to go home to their families.

Life is already short, as Ping CEO and co-founder Ryan Alshak knows too well. The former lawyer spent years caring for his mother as she battled a brain tumor before her passing. “One minute laughing with her was worth a million doing anything else,” he tells me. “I became obsessed with the idea that we spend too much of our lives on things we have no need to do — especially at work.”

That’s motivated him as he’s built his startup Ping, which uses artificial intelligence to automatically track lawyers’ work and fill out timesheets for them. There’s a massive opportunity to eliminate a core cause of burnout, lift law firm revenue by around 10% and give firms fresh insights into labor allocation.

Ping co-founder and CEO Ryan Alshak. Image Credit: Margot Duane

That’s why today Ping is announcing a $13.2 million Series A led by Upfront Ventures, along with BoxGroup, First Round, Initialized, and Ulu Ventures. Adding to Ping’s quiet $3.7 million seed led by First Round last year, the startup will spend the cash to scale up enterprise distribution and become the new timekeeping standard.

“I was a corporate litigator at Manatt Phelps down in LA and joke that I was voted the world’s worst timekeeper,” Alshak tells me. “I could either get better at doing something I dreaded or I could try and build technology that did it for me.”

The promise of eliminating the hassle could make any lawyer who hears about Ping an advocate for the firm buying the startup’s software, like how Dropbox grew as workers demanded easier file sharing. “I’ve experienced first-hand the grind of filling out timesheets,” writes Initialized partner and former attorney Alda Leu Dennis. “Ping takes away the drudgery of manual timekeeping and gives lawyers back all those precious hours.”

Traditionally, lawyers have to keep track of their time by themselves down to the tenth of an hour — reviewing documents for the Johnson case, preparing a motion to dismiss for the Lee case, a client phone call for the Sriram case. There are timesheets built into legal software suites like MyCase, legal billing software like Timesolv, and one-off tools like Time Miner and iTimeKeep. They typically offer timers that lawyers can manually start and stop on different devices, with some providing tracking of scheduled appointments, call and text logging, and integration with billing systems.

Ping goes a big step further. It uses AI and machine learning to figure out whether an activity is billable, for which client, a description of the activity, and its codification beyond just how long it lasted. Instead of merely filling in the minutes, it completes all the logs automatically with entries like “Writing up a deposition – Jenkins Case – 18 minutes”. Then it presents the timesheet to the user for review before they send it to billing.
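Ping hasn’t published implementation details, but the pipeline the article describes (capture an activity, decide billability, attribute a client matter, draft a log entry for review) can be sketched. Everything below, from the event schema to the keyword matching, is a hypothetical stand-in for Ping’s actual models:

```python
from dataclasses import dataclass

@dataclass
class ActivityEvent:
    """One captured work event (hypothetical schema, not Ping's)."""
    app: str       # e.g. "Word", "Outlook"
    title: str     # window or document title
    minutes: int   # how long the activity lasted

def draft_entry(event: ActivityEvent, matters: dict[str, list[str]]) -> dict | None:
    """Toy stand-in for Ping's models: decide billability, attribute a
    client matter, and draft a timesheet line for review."""
    text = f"{event.app} {event.title}".lower()
    for matter, keywords in matters.items():
        if any(k in text for k in keywords):
            return {"matter": matter, "description": event.title,
                    "minutes": event.minutes, "billable": True}
    return None  # unattributed: leave for manual review

# Yields something like: Writing up a deposition - Jenkins Case - 18 minutes
print(draft_entry(ActivityEvent("Word", "Writing up a deposition", 18),
                  {"Jenkins Case": ["deposition", "jenkins"]}))
```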

The big challenge now for Alshak and the team he’s assembled is to grow up. They need to go from cat-in-sunglasses logo Ping to mature wordmark Ping. “We have to graduate from being a startup to being an enterprise software company,” the CEO tells me. That means learning to sell to C-suites and IT teams, rather than just building solid product. In the relationship-driven world of law, that’s a very different skill set. Ping will have to convince clients it’s worth switching to not just for the time savings and revenue boost, but for deep data on how they could run a more efficient firm.

Along the way, Ping has to avoid any embarrassing data breaches or concerns about how its scanning technology could violate attorney-client privilege. If it can win this lucrative first business in legal, it could barge into the consulting and accounting verticals next to grow truly huge.

With eager customers, a massive market, a weak status quo, and a driven founder, Ping just needs to avoid getting in over its head with all its new cash. Spent well, the startup could leap ahead of the less tech-savvy competition.

Alshak seems determined to get it right. “We have an opportunity to build a company that gives people back their most valuable resource — time — to spend more time with their loved ones because they spent less time working,” he tells me. “My mom will live forever because she taught me the value of time. I am deeply motivated to build something that lasts . . . and do so in her name.”

 



Dutch court orders Facebook to ban celebrity crypto scam ads after another lawsuit

15:04 | 12 November

A Dutch court has ruled that Facebook can be required to use filter technologies to identify and pre-emptively take down fake ads linked to cryptocurrency scams that carry the image of a media personality, John de Mol.

The Dutch celebrity filed a lawsuit against Facebook in April over the misappropriation of his likeness to shill Bitcoin scams via fake ads run on its platform.

In an immediately enforceable preliminary judgement today, the court ordered Facebook to remove all offending ads within five days and provide de Mol with data on the accounts running them within a week.

Per the judgement, victims of the crypto scams had reported a total of €1.7 million (~$1.8M) in damages to the Dutch government at the time of the court summons.

The case is similar to a legal action instigated by UK consumer advice personality, Martin Lewis, last year, when he announced defamation proceedings against Facebook — also for misuse of his image in fake ads for crypto scams.

Lewis withdrew the suit at the start of this year after Facebook agreed to apply new measures to tackle the problem: namely, a scam ads report button. It also agreed to provide funding to a UK consumer advice organization to set up a scam advice service.

In the de Mol case the lawsuit was allowed to run its course — resulting in today’s preliminary judgement against Facebook. It’s not yet clear whether the company will appeal but in the wake of the ruling Facebook has said it will bring the scam ads report button to the Dutch market early next month.

In court, the platform giant sought to argue that it could not more proactively remove the Bitcoin scam ads containing de Mol’s image on the grounds that doing so would breach EU law against general monitoring conditions being placed on Internet platforms.

However, the court rejected that argument, citing a recent ruling by Europe’s top court related to platform obligations to remove hate speech, and concluding that the specificity of the requested measures could not be classified as ‘general obligations of supervision’.

It also rejected arguments by Facebook’s lawyers that restricting the fake scam ads would be restricting the freedom of expression of a natural person, or the right to be freely informed — pointing out that the ‘expressions’ involved are aimed at commercial gain, as well as including fraudulent practices.

Facebook also sought to argue it is already doing all it can to identify and take down the fake scam ads — saying too that its screening processes are not perfect. But the court said there’s no requirement for 100% effectiveness for additional proactive measures to be ordered. Its ruling further notes a striking reduction in fake scam ads using de Mol’s image since the lawsuit was announced.

Facebook’s argument that it’s just a neutral platform was also rejected, with the court pointing out that its core business is advertising.

It also took the view that requiring Facebook to apply technically complicated measures and extra effort, including in terms of manpower and costs, to more effectively remove offending scam ads is not unreasonable in this context.

The judgement orders Facebook to remove fake scam ads containing de Mol’s likeness from Facebook and Instagram within five days of the order — with a penalty of €10k for each day it fails to comply, up to a maximum of €1M (~$1.1M).

The court order also requires Facebook to provide de Mol with data on the accounts that had been misusing his image within seven days of the judgement, with a further penalty of €1k per day for failure to comply, up to a maximum of €100k.

Facebook has also been ordered to pay the case costs.

Responding to the judgement in a statement, a Facebook spokesperson told us:

We have just received the ruling and will now look at its implications. We will consider all legal actions, including appeal. Importantly, this ruling does not change our commitment to fighting these types of ads. We cannot stress enough that these types of ads have absolutely no place on Facebook and we remove them when we find them. We take this very seriously and will therefore make our scam ads reporting form available in the Netherlands in early December. This is an additional way to get feedback from people, which in turn helps train our machine learning models. It is in our interest to protect our users from fraudsters and when we find violators we will take action to stop their activity, up to and including taking legal action against them in court.

Law professor Mireille Hildebrandt told us that the judgement provides an alternative legal route for Facebook users to litigate and pursue collective enforcement of European personal data rights, rather than suing for damages, which entails a high burden of proof.

Injunctions are faster and more effective, Hildebrandt added.

The judgement also raises questions around the burden of proof for demonstrating that Facebook has removed scam ads with sufficient (increased) accuracy, and around what specific additional measures it might deploy to improve its takedown rate.

The introduction of the ‘report scam ad’ button does, though, provide one clear avenue for measuring takedown performance.

The button was finally rolled out to the UK market in July. And while Facebook has talked since the start of this year about ‘envisaging’ introducing it in other markets, it hasn’t exactly been proactive about doing so — until now, with this court order.

 



Facebook machine learning aims to modify faces, hands, and… outfits

23:53 | 11 November

The latest research out of Facebook sets machine learning models to tasks that, to us, seem rather ordinary — but for a computer are still monstrously difficult. These projects aim to anonymize faces, improvise hand movements, and — perhaps hardest of all — give credible fashion advice.

The research here was presented recently at the International Conference on Computer Vision, among a few dozen other papers from the company, which has invested heavily in AI research, computer vision in particular.

Modifying faces in motion is something we’ve all come to associate with “deepfakes” and other nefarious applications. But the Facebook team felt there was actually a potentially humanitarian application of the technology.

Deepfakes use a carefully cultivated understanding of the face’s features and landmarks to map one person’s expressions and movements onto a completely different face. The Facebook team used the same features and landmarks, but instead of mapping another face, used them to tweak the face just enough that it’s no longer recognizable to facial recognition engines.

This could allow someone who, for whatever reason, wants to appear on video but not be recognized publicly to do so without something as clunky as a mask or completely fabricated face. Instead, they’d look a bit like themselves, but with slightly wider-set eyes, a thinner mouth, higher forehead, and so on.

The system they created appears to work well, though it would of course require some optimization before it could be deployed as a product. One can imagine how useful such a thing might be, either for those at risk of retribution from political oppressors or for more garden-variety privacy preferences.
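Facebook hasn’t released this system, but the idea as described (perturb the same landmarks deepfakes rely on, just until a recognition model stops matching) can be sketched. The `embed` function, threshold and random-perturbation loop below are illustrative assumptions, not the paper’s method:

```python
import numpy as np

def deidentify(landmarks: np.ndarray, embed, ref: np.ndarray,
               threshold: float = 0.6, step: float = 0.01,
               max_iters: int = 50) -> np.ndarray:
    """Nudge facial landmarks until a recognition embedding no longer
    matches the reference identity. `embed` stands in for any
    landmarks->embedding model; the loop and threshold are illustrative."""
    rng = np.random.default_rng(0)
    tweaked = landmarks.copy()
    for _ in range(max_iters):
        if np.linalg.norm(embed(tweaked) - ref) > threshold:
            return tweaked  # far enough that recognition engines miss it
        tweaked = tweaked + step * rng.standard_normal(tweaked.shape)  # small tweak
    return tweaked
```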

In virtual spaces it can be difficult to recognize someone at all — partly because of the lack of nonverbal cues we perceive constantly in real life. This next piece of research attempts to capture, catalogue, and reproduce these movements, or at least the ones we make with our hands.

It’s a little funny to think about, but really there’s not a lot of data on how exactly people move their hands when they talk. So the researchers recorded 50 full hours of pairs of people having ordinary conversations — or as ordinary as they could while suited up in high-end motion capture gear.

These (relatively) natural conversations, and the body and hand motions that went with them, were then ingested by the machine learning model; it learned to associate, for example, that when people said “back then” they’d point behind them, or when they said “all over the place,” they’d make a sweeping gesture.
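As a toy illustration of that association step (the real model works on continuous motion-capture data, not discrete labels), here is phrase-to-gesture learning reduced to co-occurrence counting over hypothetical pairs:

```python
from collections import Counter, defaultdict

# Hypothetical (phrase, gesture) pairs extracted from the recorded sessions
pairs = [("back then", "point_behind"),
         ("all over the place", "sweep"),
         ("back then", "point_behind")]

cooccur: defaultdict[str, Counter] = defaultdict(Counter)
for phrase, gesture in pairs:
    cooccur[phrase][gesture] += 1  # count how often each pairing appears

def predict_gesture(phrase: str) -> str | None:
    """Return the gesture most often seen with this phrase, if any."""
    counts = cooccur.get(phrase)
    return counts.most_common(1)[0][0] if counts else None

print(predict_gesture("back then"))  # -> point_behind
```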

What might this be used for? More natural-seeming conversations in virtual environments, perhaps, but maybe also by animators who’d like to base the motions of their characters in real life without doing motion capture of their own. It turns out that the database Facebook put together is really like nothing else out there in scale or detail, which is valuable in and of itself.

Similarly unique, but arguably more frivolous, is this system meant to help you improve your outfit. If we’re going to have smart mirrors, they ought to be able to make suggestions, right?

Fashion++ is a system that, having ingested a large library of images labeled with both the pieces worn (e.g. hat, scarf, skirt) and overall fashionability (obviously a subjective measure), can then look at a given outfit and suggest changes. Nothing major — it isn’t that sophisticated — but rather small things like removing a layer or tucking in a shirt.
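A minimal sketch of that suggest-small-edits idea, assuming a learned fashionability scorer `score` and toy edit operators; none of this is Fashion++’s actual code:

```python
EDITS = ["remove_layer", "tuck_in_shirt", "roll_sleeves", "no_change"]

def apply_edit(outfit: list[str], edit: str) -> list[str]:
    """Hypothetical edit operator over a list of garment tags."""
    if edit == "remove_layer" and "jacket" in outfit:
        return [piece for piece in outfit if piece != "jacket"]
    return list(outfit)

def suggest(outfit: list[str], score) -> str:
    """Return the small edit whose result scores highest under `score`,
    a stand-in for the learned fashionability model."""
    return max(EDITS, key=lambda e: score(apply_edit(outfit, e)))

# e.g. suggest(["shirt", "jacket", "skirt"], score=lambda o: -len(o))
```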

It’s far from a digital fashion assistant, but the paper documents early success in making suggestions for outfits that real people found credible and perhaps even a good idea. That’s pretty impressive given how complex this problem proves to be when you really consider it, and how ill-defined “fashionable” really is.

Facebook’s ICCV research shows that the company and its researchers are looking fairly broadly at the question of what computer vision can accomplish. It’s always nice to detect faces in a photo faster or more accurately, or infer location from the objects in a room, but clearly there are many more obscure or surprising aspects of digital life that could be improved with a little visual intelligence. You can check out the rest of the papers here.

 



A chat about UK deep tech and spin-out success with Octopus Ventures

11:50 | 11 November

New research commissioned by UK VC firm Octopus Ventures has put a spotlight on which of the country’s higher education institutions are doing the most to support spinouts. The report compiles a ranking of universities, foregrounding those with a record of producing what partner Simon King dubs “quality spinouts”.

The research combines and weights five data points — looking at university spinouts’ relative total funding as a means of quantifying exit success, for example. The idea for the Entrepreneurial Impact Ranking, as it’s been called, is to identify not just the higher education institutions with a track record of encouraging academics to set up a business off the back of a piece of novel work, but those best at identifying the most promising commercialization opportunities — ultimately leading to spinout success (such as an exit where the company was sold for more than it raised).

Hence the report looks at data over an almost ten-year period (2009-2018) to track spinouts as they progress from an idea in the lab through prototyping to getting a product to market.

The ranking looks at five factors in all: Total funding per university; total spinouts created per university; total disclosures per university; total patents per university; and total sales from spinouts per university.
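The report doesn’t publish its weighting scheme, but the composite has a simple shape: normalize each self-reported metric and take a weighted sum. The weights below are invented purely for illustration:

```python
FACTORS = ["funding", "spinouts", "disclosures", "patents", "sales"]
WEIGHTS = {"funding": 0.3, "spinouts": 0.2, "disclosures": 0.15,
           "patents": 0.15, "sales": 0.2}  # invented weights, sum to 1.0

def impact_score(uni: dict[str, float], best: dict[str, float]) -> float:
    """Normalize each self-reported metric against the best-performing
    university, then combine into one weighted composite."""
    return sum(WEIGHTS[f] * uni[f] / best[f] for f in FACTORS)
```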

Topping the ranking is Queen’s University Belfast, which the report notes has had a number of notable successes via its commercialization arm, Qubis, name-checking the likes of Kainos (digital services), Andor Technology (scientific imaging) and Fusion Antibodies (therapeutics & diagnostics), all of which have been listed on the London Stock Exchange.

The index ranks the top 100 UK universities on this entrepreneurial impact benchmark — but the rest of the top ten are as follows:

2) University of Cambridge
3) Cardiff University
4) Queen Mary University of London
5) University of Leeds
6) University of Dundee
7) University of Nottingham
8) King’s College London
9) University of Oxford
10) Imperial College London

Octopus Ventures says the ranking will help it to get a better handle on which universities to spend more time with as it searches for its next deep tech investment.

It also wants to increase visibility into how the UK is doing when it comes to commercializing academic research to feed further growth of the ecosystem by sharing best practice, per King.

“We are looking at a number of data points which are all self-reported by the universities themselves to the Higher Education Statistics Agency. And then we combine those in the way that we think brings out at a higher level which universities are doing a good job of spinning out companies,” he says.

“It means that you take into consideration which university is producing quality spin-outs. So it’s not just spray and pray and get lots of stuff out there. But actually which universities are creating spin-outs that then go on to return value back to them.”

 



Microsoft uses AI to diagnose cervical cancer faster in India

16:00 | 9 November

More women in India die from cervical cancer than in any other country. This preventable disease kills around 67,000 women in India every year, more than 25% of the 260,000 deaths worldwide.

Effective screening and early detection can help reduce its incidence, but part of the challenge — and there are several parts — today is that the testing process to detect the onset of the disease is unbearably time-consuming.

This is because the existing methodology that cytopathologists use is time-consuming to begin with, and because there are very few of them in the country. Could AI speed this up?

At SRL Diagnostics, the largest chain offering diagnostic services in pathology and radiology in India, we are getting an early look at this. Last year, Microsoft partnered with SRL Diagnostics to co-create an AI Network for Pathology to ease the burden of cytopathologists and histopathologists.

SRL Diagnostics receives more than 100,000 Pap smear samples every year. About 98% of these samples are typically normal; only the remaining 2% require intervention. “We were looking for ways to ensure our cytopathologists were able to find those 2% abnormal samples faster,” explained Dr. Arnab Roy, Technical Lead for New Initiatives & Knowledge Management at SRL Diagnostics.
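That goal is essentially a triage problem: score each slide and surface the likely-abnormal minority first. A minimal sketch, assuming some trained scoring model `predict_abnormal_prob` (an illustration, not SRL or Microsoft’s pipeline):

```python
def triage(slides, predict_abnormal_prob, threshold: float = 0.5):
    """Put likely-abnormal slides at the front of the review queue so
    cytopathologists reach the ~2% that need intervention sooner."""
    urgent, routine = [], []
    for slide in slides:
        (urgent if predict_abnormal_prob(slide) >= threshold else routine).append(slide)
    return urgent, routine  # review `urgent` first
```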

Cytopathologists at SRL Diagnostics manually studied digitally scanned versions of Whole Slide Imaging (WSI) slides, each comprising about 300-400 cells, and marked their observations, which were used as training data for the Cervical Cancer Image Detection API.

A digitally scanned version of a Whole Slide Imaging (WSI) slide, which is used to train the AI model

Then there was the challenge of subjectivity. “Different cytopathologists examine different elements in a smear slide in a unique manner even if the overall diagnosis is the same. This is the subjectivity element in the whole process, which many a time is linked to the experience of the expert,” reveals Dr. Roy.

Manish Gupta, Principal Applied Researcher at Microsoft Azure Global Engineering, who worked closely with the team at SRL Diagnostics, said the idea was to create an AI algorithm that could identify areas that everybody was looking at and “create a consensus on the areas assessed.”

Cytopathologists across multiple labs and locations annotated thousands of tile images of cervical smears, creating discordant and concordant notes on each sample image.

“The images for which annotations were found to be discordant — that is if they were viewed differently by three team members — were sent to senior cytopathologists for final analysis,” Microsoft wrote in a blog post.
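The routing rule Microsoft describes is simple to express in code. A sketch of the three-annotator workflow, with hypothetical labels:

```python
def route_tile(labels: list[str]) -> str:
    """Three cytopathologists annotate each tile image; unanimous tiles
    become training labels, discordant ones go to a senior expert."""
    assert len(labels) == 3
    if len(set(labels)) == 1:
        return f"concordant: keep '{labels[0]}' as the training label"
    return "discordant: escalate to a senior cytopathologist"

print(route_tile(["abnormal", "abnormal", "normal"]))  # -> escalated
```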

This week, the two revealed that their collaboration has started to show results. SRL Diagnostics has started an internal preview to use Cervical Cancer Image Detection API. The Cervical Cancer Image Detection API, which runs on Microsoft’s Azure, can quickly screen liquid-based cytology slide images for detection of cervical cancer in the early stages and return insights to pathologists in labs, the two said.

The AI model can now differentiate between normal and abnormal smear slides with accuracy and is currently under validation in labs for a period of three to six months. It can also classify smear slides based on the seven subtypes of the cervical cytopathological scale, the two wrote in a blog post.

During the internal preview period, the exercise will use more than half-a-million anonymized digital tile images. Following internal validation, the API will be previewed in external cervical cancer diagnostic workflows, including hospitals and other diagnostic centers.

“Cytopathologists now have to review fewer areas, 20 as of now, on a whole slide liquid-based cytology image and validate the positive cases, thus bringing in greater efficiency and speeding up the initial screening process,” Microsoft wrote.

“The API has the potential of increasing the productivity of a cytopathology section by about four times. In a future scenario of automated slide preparation with assistance from AI, cytopathologists can do a job in two hours what would earlier take about eight hours!” Dr. Roy said.

The SRL Diagnostics-Microsoft consortium said it is hopeful its APIs could find application in other fields of pathology, such as the diagnosis of kidney pathologies and of oral, pancreatic and liver cancers. The consortium also aims to expand the model’s reach through tie-ups with private players and governments, including in remote geographies where the availability of histopathologists is a challenge.

The announcement this week is the latest example of Microsoft’s ongoing research work in India. The world’s second most populous nation has become a test bed for many American technology companies to build new products and services that solve local challenges as they look for their next billion users worldwide.

Last week, Microsoft announced its AI project was helping improve the way driving tests are conducted in India. The company has unveiled a score of tools for the Indian market in the last two years. Microsoft has previously developed tools to help farmers in India increase their crop yields and worked with hospitals to prevent avoidable blindness. Last year, the company partnered with Apollo Hospitals to create an AI-powered API customized to predict risk of heart diseases in India.

Also last year, the company worked with cricket legend Anil Kumble to develop a tracking device that helps youngsters analyze their batting performance. Microsoft has also tied up with insurance firm ICICI Lombard to help it process customers’ repair claims and renew lapsed policies using an AI system.

 



How Microsoft is trying to become more innovative

00:52 | 8 November

Microsoft Research is a globally distributed playground for people interested in solving fundamental science problems.

These projects often focus on machine learning and artificial intelligence, and since Microsoft is on a mission to infuse all of its products with more AI smarts, it’s no surprise that it’s also seeking ways to integrate Microsoft Research’s innovations into the rest of the company.

Across the board, the company is trying to find ways to become more innovative, especially around its work in AI, and it’s putting processes in place to do so. Microsoft is unusually open about this process, too, and actually made it somewhat of a focus this week at Ignite, a yearly conference that typically focuses more on technical IT management topics.

At Ignite, Microsoft will for the first time present these projects externally at a dedicated keynote. That feels similar to what Google used to do with its ATAP group at its I/O events and is obviously meant to showcase the cutting-edge innovation that happens inside of Microsoft (outside of making Excel smarter).

To manage its AI innovation efforts, Microsoft created the Microsoft AI group led by VP Mitra Azizirad, who’s tasked with establishing thought leadership in this space internally and externally, and helping the company itself innovate faster (Microsoft’s AI for Good projects also fall under this group’s purview). I sat down with Azizirad to get a better idea of what her team is doing and how she approaches getting companies to innovate around AI and bring research projects out of the lab.

“We began to put together a narrative for the company of what it really means to be in an AI-driven world and what we look at from a differentiated perspective,” Azizirad said. “What we’ve done in this area is something that has resonated and landed well. And now we’re including AI, but we’re expanding beyond it to other paradigm shifts like human-machine interaction, future of computing and digital responsibility, as more than just a set of principles and practices but an area of innovation in and of itself.”

Currently, Microsoft is doing a very good job at talking and thinking about horizon one opportunities, as well as horizon three projects that are still years out, she said. “Horizon two, we need to get better at, and that’s what we’re doing.”

It’s worth stressing that Microsoft AI, which launched about two years ago, marks the first time there’s a business, marketing and product management team associated with Microsoft Research, so the team does get a lot of insights into upcoming technologies. Just in the last couple of years, Microsoft has published more than 6,000 research papers on AI, some of which clearly have a future in the company’s products.

 



This robotic arm slows down to avoid the uncanny valley

00:48 | 8 November

Robotic arms can move fast enough to snatch thrown objects right out of the air… but should they? Not unless you want them to unnerve the humans they’re interacting with, according to work out of Disney Research. Roboticists there found that slowing a robot’s reaction time made it feel more normal to people.

Disney has of course been interested in robotics for decades, and the automatons in its theme parks are among the most famous robots in the world. But there are few opportunities for those robots to interact directly with people. Hence a series of research projects at its research division aimed at safe and non-weird robot-human coexistence.

In this case the question was how to make handing over an item to a robot feel natural and non-threatening. Obviously if, when you reached out with a ticket or empty cup, the robot moved like lightning and snapped it out of your hands, that could be seen as potentially dangerous, or at the very least make people nervous.

So the robot arm in this case (attached to an anthropomorphic cat torso) moves at a normal human speed. But there’s also the question of when it should reach out. After all, it takes us humans a second to realize that someone is handing something to us, then to reach out and grab it. A computer vision system might be able to track an object and send the hand after it more quickly, but it might feel strange.

The researchers set up an experiment where the robot hand reached out to take a ring from a person, under three conditions each for speed and delay.

When the hand itself moved quickly, people reported less “warmth” and more “discomfort.” The slow speed performed best on those scores. And when the hand moved with no delay, it left people similarly uneasy. But interestingly, too long a delay had a similar effect.

Turns out there’s a happy medium that matches what people seem to expect from a hand reaching out to take something from them. Slower movement is better, to a certain point one imagines, and a reasonable but not sluggish delay makes it feel more human.
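Read as a controller, the finding boils down to two tuned parameters: a moderate reach speed and a deliberate reaction delay. A minimal sketch follows; the numbers are invented for illustration, since the paper reports relative effects rather than prescribed values:

```python
import time

REACH_SPEED_MPS = 0.4   # deliberately slower than the arm's capability
REACTION_DELAY_S = 0.7  # neither instant (unnerving) nor sluggish

def hand_over(detect_offered_object, reach_toward):
    """Wait a human-feeling beat after the object is offered, then reach
    at a moderate speed."""
    target = detect_offered_object()  # vision could react faster, but shouldn't
    time.sleep(REACTION_DELAY_S)
    reach_toward(target, speed=REACH_SPEED_MPS)
```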

The handover system detailed in a paper published today (and video below) is robust against the usual circumstances: moving targets, unexpected forces, and so on. It’ll be a while before an Aristocats bot takes your mug from you at a Disney World cafe, but at least you can be sure it won’t snatch it faster than the eye can follow and scare everyone around you.

 



Legislators from ten parliaments put the squeeze on Facebook

00:47 | 8 November

The third session of the International Grand Committee on Disinformation, a multi-nation body comprised of global legislators with concerns about the societal impacts of social media giants, has been taking place in Dublin this week — once again without any senior Facebook management in attendance.

The committee was formed last year after Facebook’s CEO Mark Zuckerberg repeatedly refused to give evidence to a wide-ranging UK parliamentary enquiry into online disinformation and the use of social media tools for political campaigns. That snub encouraged joint working by international parliamentarians over a shared concern that’s also a cross-border regulatory and accountability challenge.

But while Zuckerberg still, seemingly, does not feel personally accountable to international parliaments — even as his latest stand-in at today’s committee hearing, policy chief Monika Bickert, proudly trumpeted the fact that 87 per cent of Facebook’s users are people outside the US — global legislators have been growth hacking a collective understanding of nation-state-scale platforms and the deleterious impacts their data-gobbling algorithmic content hierarchies and microtargeted ads are having on societies and democracies around the world.

Incisive questions from the committee today included sceptical scrutiny of Facebook’s claims and aims for a self-styled ‘Content Oversight Board’ it has said will launch next year — with one Irish legislator querying how the mechanism could possibly be independent of Facebook, as well as wondering how a retrospective appeals body could prevent content-driven harms. (On that, Facebook seemed to claim that most complaints it gets from users are about content takedowns.)

Another question was whether the company’s planned Libra digital currency might not at least partially be an attempt to resolve a reputational risk for Facebook, of accepting political ads in foreign currency, by creating a single global digital currency that scrubs away that layer of auditability. Bickert denied the suggestion, saying the Libra project is unrelated to the disinformation issue and “is about access to financial services”.

Twitter’s recently announced total ban on political issue ads also faced some critical questioning by the committee, with the company being asked whether it will be banning environmental groups from running ads about climate change yet continuing to take money from oil giants that wish to run promoted tweets on the topic. Karen White, director of public policy, said they were aware of the concern and are still working through the policy detail for a fuller release due later this month.

But it was Facebook that came in for the bulk of criticism during the session, with Bickert fielding the vast majority of legislators’ questions — almost all of which were sceptically framed and some, including from the only US legislator in the room asking questions, outright hostile.

Google’s rep, meanwhile, had a very quiet hour and a half, with barely any questions fired his way, while Twitter won itself plenty of praise from legislators and witnesses for taking a proactive stance and banning political microtargeting altogether.

The question legislators kept returning to during many of today’s sessions, most of which didn’t involve the reps from the tech giants, was how governments can effectively regulate US-based Internet platforms whose profits are fuelled by the amplification of disinformation as a mechanism for driving engagement with their services and ads.

Suggestions varied from breaking up tech giants to breaking down business models that were roundly accused of incentivizing the spread of outrageous nonsense for a pure-play profit motive, including by weaponizing people’s data to dart them with ‘relevant’ propaganda.

The committee also heard specific calls for European regulators to hurry up and enforce existing data protection law — specifically the EU’s General Data Protection Regulation (GDPR) — as a possible short-cut route to shrinking the harms legislators appeared to agree are linked to platforms’ data-reliant tracking for individual microtargeting.

A number of witnesses warned that liberal democracies remain drastically unprepared for the ongoing onslaught of malicious, hypertargeted fakes; that adtech giants’ business models are engineered for outrage and social division as an intentional choice and scheme to monopolize attention; and that even if we’ve now passed “peak vulnerability”, in terms of societal susceptibility to Internet-based disinformation campaigns (purely as a consequence of how many eyes have been opened to the risks since 2016), the activity itself hasn’t yet peaked and huge challenges for democratic nation states remain.

The latter point was made by disinformation researcher Ben Nimmo, director of investigations at Graphika.

Multiple witnesses called for Facebook to be prohibited from running political advertising as a matter of urgency, with plenty of barbed questions attacking its recent policy decision not to fact-check political ads.

Others went further — calling for more fundamental interventions to force reform of its business model and/or divest it of the other social platforms it also owns. Given the company’s systematic failure to demonstrate it can be trusted with people’s data, that’s enough reason to break it back up into separate social products, runs the argument.

Former BlackBerry co-CEO Jim Balsillie espoused the view that tech giants’ business models are engineered to profit from manipulation, meaning they inherently pose a threat to liberal democracies. And investor and former Facebook mentor Roger McNamee, who has written a critical book about the company’s business model, called for personal data to be treated as a human right — so it cannot be stockpiled and turned into an asset to be exploited by behavior-manipulating adtech giants.

Also giving evidence today, journalist Carole Cadwalladr, who has been instrumental in investigating the Cambridge Analytica Facebook data misuse scandal, suggested no country should be trusting its election to Facebook. She also decried the fact that the UK is now headed to the polls, for a December general election, with no reforms to its electoral law and with key individuals involved in breaches of electoral law during the 2016 Brexit referendum now in positions of greater power to manipulate democratic outcomes. She too added her voice to calls for Facebook to be prohibited from running political ads.

In another compelling testimony, Marc Rotenberg, president and executive director of the Electronic Privacy Information Center (Epic) in Washington DC, recounted the long and forlorn history of attempts by US privacy advocates to win changes to Facebook’s policies to respect user agency and privacy — initially from the company itself, before petitioning regulators to try to get them to enforce promises Facebook had reneged on, yet still getting exactly nowhere.

No more ‘speeding tickets’

“We have spent the last many years trying to get the FTC to act against Facebook and over this period of time the complaints from many other consumer organizations and users have increased,” he told the committee. “Complaints about the use of personal data, complaints about the tracking of people who are not Facebook users. Complaints about the tracking of Facebook users who are no longer on the platform. In fact in a freedom of information request brought by Epic we uncovered 29,000 complaints now pending against the company.”

He described the FTC judgement against Facebook, which resulted in a $5BN penalty for the company in June, as a “historic fine” but also essentially just a “speeding ticket” — because the regulator did not enforce any changes to its business model. So yet another regulatory lapse.

“The FTC left in place Facebook’s business practices and left at risk the users of the service,” he warned, adding: “My message to you today is simple: You must act. You cannot wait. You cannot wait ten years or even a year to take action against this company.”

He too urged legislators to ban the company from engaging in political advertising — until “adequate legal safeguards are established”. “The terms of the GDPR must be enforced against Facebook and they should be enforced now,” Rotenberg added, calling also for Facebook to be required to divest of WhatsApp — “not because of a great scheme to break up big tech but because the company violated its commitments to protect the data of WhatsApp users as a condition of the acquisition”.

In another particularly awkward moment for the social media giant, Keit Pentus-Rosimannus, a legislator from Estonia, asked Bickert directly why Facebook doesn’t stop taking money for political ads.

The legislator pointed out that it has already claimed revenue related to such ads is incremental for its business, making the further point that political speech can simply be freely posted to Facebook (as organic content); ergo, Facebook doesn’t need to take money from politicians to run ads that lie — since they can just post their lies freely to Facebook.

Bickert had no good answer to this. “We think that there should be ways that politicians can interact with their public and part of that means sharing their views through ads,” was her best shot at a response.

“I will say this is an area we’re here today to discuss collaboration, with a thought towards what we should be doing together,” she added. “Election integrity is an area where we have proactively said we want regulation. We think it’s appropriate. Defining political ads and who should run them and who should be able to and when and where. Those are things that we would like to work on regulation with governments.”

“Yet Twitter has done it without new regulation. Why can’t you do it?” pressed Pentus-Rosimannus.

“We think that it is not appropriate for Facebook to be deciding for the world what is true or false and we think that politicians should have an ability to interact with their audiences. So long as they’re following our ads policies,” Bickert responded. “But again we’re very open to how together we could come up with regulation that could define and tackle these issues.”

tl;dr Facebook could be seen once again deploying a policy minion to push for a ‘business as usual’ strategy that functions by seeking to fog the issues and reframe the notion of regulation as a set of self-serving (and very low-friction) ‘guide rails’, rather than as major business model surgery.

Bickert was doing this even as the committee was hearing from multiple voices making the equal and opposite point with acute force.

Another of those critical voices was congressman David Cicilline — a US legislator making his first appearance at the Grand Committee. He closely questioned Bickert on how a Facebook user seeing a political ad that contains false information would know they are being targeted by false information, rejecting repeated attempts to misleadingly reframe his question as one just about general targeting data.

“Again, with respect to the veracity, they wouldn’t know they’re being targeted with false information; they would know why they’re being targeted as to the demographics… but not as to the veracity or the falseness of the statement,” he pointed out.

Bickert responded by claiming that political speech is “so heavily scrutinized there is a high likelihood that somebody would know if information is false” — which earned her a withering rebuke.

“Mark Zuckerberg’s theory that sunlight is the best disinfectant only works if an advertisement is actually exposed to sunlight. But as hundreds of Facebook employees made clear in an open letter last week, Facebook’s advanced targeting and behavioral tracking tools — and I quote — “[make it] hard for people in the electorate to participate in the public scrutiny that we’re saying comes along with political speech” — end quote — as they know — and I quote — “these ads are often so microtargeted that the conversations on Facebook’s platforms are much more siloed than on the other platforms,” said Cicilline.

“So, Ms Bickert, it seems clear that microtargeting prevents the very public scrutiny that would serve as an effective check on false advertisements. And doesn’t the entire justification for this policy completely fall apart given that Facebook allows politicians both to run fake ads and to distribute those fake ads only to the people most vulnerable to believe them? So this is a good theory about sunlight but in fact in practice your policies permit someone to make false representations and to microtarget who gets them — and so this big public scrutiny that serves as a justification just doesn’t exist.”

Facebook’s head of global policy management responded by claiming there’s “great transparency” around political ads on its platform — as a result of what she dubbed its “unprecedented” political ad library.

“You can look up any ad in this library and see what is the breakdown on the audience who has seen this ad,” she said, further claiming that “many [political ads] are not microtargeted at all”.

“Isn’t the problem here that Facebook has too much power — and shouldn’t we be thinking about breaking up that power rather than allowing Facebook’s decisions to continue to have such enormous consequences for our democracy?” rejoined Cicilline, not waiting for an answer and instead laying down a critical statement. “The cruel irony is that your company is invoking the protections of free speech as a cloak to defend your conduct which is in fact undermining and threatening the very institutions of democracy it’s cloaking itself in.”

The session was long on questions for Facebook, and short on answers containing anything other than the most self-serving substance.

Major GDPR enforcements coming in 2020

During a later session, held without any of the tech giants present and intended for legislators to query the state of play of regulation around online platforms, Ireland’s data protection commissioner, Helen Dixon, signalled that no major enforcements against Facebook et al will come this year, saying instead that decisions on a number of cross-border cases will come in 2020.

Ireland has a plate stacked high with complaints against tech giants since the GDPR came into force in May 2018. Among the 21 “large scale” investigations into big tech companies that remain ongoing are probes around transparency and the lawfulness of data processing by social media platform giants.

The adtech industry’s use of personal data in the real-time bidding programmatic process is also under the regulatory microscope.

Dixon and the Irish Data Protection Commission (DPC) take center stage as a regulator for US tech giants, given how many of these companies have chosen to site their international headquarters in Ireland — encouraged by business-friendly corporate tax rates.

The DPC has a pivotal role on account of a one-stop-shop mechanism within the regulation that allows a data protection agency with primary jurisdiction over a data controller to take the lead on cross-border data processing cases, with other EU member states’ DPAs able to feed into but not lead such a complaint.

Some of the Irish DPC’s probes have already lasted as long as the 18 months since GDPR came into force across the bloc.

Dixon argued today that this is still a reasonable timeframe for enforcing an updated data protection regime, despite signalling further delay before any enforcements in these major cases. “It’s a mistake to say there’s been no enforcement… but there hasn’t been an outcome yet to the large scale investigations we have open, underway into the big tech platforms around lawfulness, transparency, privacy by design and default and so on. Eighteen months is not a long time. Not all of the investigations have been open for 18 months,” she said.

“We must follow due process or we won’t secure the outcome in the end. These companies have market power but they also have the resources to litigate forever. And so we have to ensure we follow due process, we allow them a right to be heard, we conclude the legal analysis carefully by applying the principles in the GDPR to the scenarios at issue, and then we can hope to deliver the outcomes that the GDPR promises.

“So that work is underway. We couldn’t be working more diligently at it. And we will have the first sets of decisions that will start rolling out in the very near term.”

Asked by the committee about the level of cooperation the DPC is getting from the tech giants under investigation she said they are “engaging and cooperating” — but also that they’re “challenging at every turn”.

She also expressed a view that it’s not yet clear whether GDPR enforcement will be able to have a near-term impact on reining in any behaviors found to be infringing the law, given further potential legal push back from platforms after decisions are issued.

“The regulated entities are obliged under the GDPR to cooperate with investigations conducted by the data protection authority, and to date, across the 21 large-scale investigations we have opened into big tech organizations, they are engaging and cooperating. With equal measure they’re challenging at every turn as well and seeking constant clarifications around due process, but they are cooperating and engaging,” she told the committee.

“What remains to be seen is how the investigations we currently have open will conclude. And whether there will ultimately be compliance with the outcomes of those investigations or whether they will be subject to lengthy challenge and so on. So I think the big question of whether we’re going to be able to near-term drive the kind of outcomes we want is still an open question. And it awaits us as a data protection authority to put down the first final decisions in a number of cases.”

She also expressed doubt about whether the GDPR data protection framework will, ultimately, amount to a tool that can regulate underlying business models that are based on collecting data for the purpose of behavioral advertising.

“The GDPR isn’t set up to tackle business models, per se,” she said. “It’s set up to apply principles to data processing operations. And so there’s a complexity when we come to look at something like adtech or online behavioral advertising in that we have to target multiple actors.

“For that reason we’re looking at publishers at the front end, who start the data collection from users — it’s when we first click on a website that the tracking technologies, the pixels, the cookies, the social plug-ins, start the data collection that ultimately ends up categorizing us for the purposes of sponsored stories or ad serving. So we’re looking at the ad exchanges, we’re looking at the real-time bidding system. We’re looking at the front end publishers. And we’re looking at the ad brokers who play an important part in all of this in combining online and offline sources of data. So we’ll apply the principles against those data processing operations, we’ll apply them rigorously. We’ll conclude and then we’ll have to see: does that add up to a changing of the underlying business model? And I think the jury is out on that until we conclude.”

Epic’s Rotenberg argued to the contrary on this when asked by the committee for the most appropriate model to use for regulating data-driven platforms — saying that “all roads lead to the GDPR”.

“It’s a set of rights and responsibilities associated with the collection and use of personal data and when companies choose to collect personal data they should be held to account,” he said, suggesting an interpretation of the law that does not require other European data protection agencies to wait for Ireland’s decision on key cross-border cases.

“The Schrems decision of 2015 makes clear that while co-ordinated enforcement anticipated under the GDPR is important, individual DPAs have their own authority to enforce the provisions of the charter — which means that individual DPAs do not need to wait for a coordinated response to bring an enforcement action.”

A case remains pending before Europe’s top court that looks set to lay down a firm rule on exactly that point.

“As a matter of law the GDPR contains the authority within its text to enforce the other laws of the European Union — this is largely about the misuse and the collection and use of personal data for microtargeting,” Rotenberg also argued. “That problem can be addressed through the GDPR but it’s going to take an urgent response. Not a long term game plan.”

When GDPR enforcement decisions do come, Dixon suggested, they could have a wider impact than only applying to their direct subject: there’s an appetite from data processors generally for more guidance on compliance with the law, meaning that both the clarity and the deterrence factor derived from large-scale platform enforcement decisions could help steer the industry down a reforming path.

Though, again, what exactly those platform enforcements may be remains pending until 2020.

“Probably the first large-scale investigation we’re going to conclude under GDPR is one into the principle of transparency and involving one of the larger platforms,” Dixon also told the committee, responding to a legislator’s question asking if she believes consumers are clear about exactly what they’re giving up when they agree to their information being processed to access a digital service.

“We will shortly be making a decision spelling out in detail how compliance with the transparency obligations under Articles 12 to 14 of the GDPR should look in that context. But it is very clear that users are typically unaware,” she suggested. “For example some of the large platforms do have capabilities for users to completely opt out of personalized ad serving but most users aren’t aware of it. There are also patterns in operation that nudge users in certain directions. So one of the things that [we’re doing] — aside from the hard enforcement cases that we’re going to take — we’ve also published guidance recently for example on that issue of how users are being nudged to make choices that are perhaps more privacy invasive than they might otherwise if they had an awareness.

“So I think there’s a role for us as a regulatory authority, as well as regulating the platforms to also drive awareness amongst users. But it’s an uphill battle, given the scale of what users are facing.”

Asked by the committee about the effectiveness of financial penalties as a tool for platform regulation, Dixon pointed to research that suggests fines alone make no difference — but she highlighted the fact that GDPR affords Europe’s regulators a far more potent power in their toolbox: the power to order changes to data processing, or even ban it altogether.

“It’s our view that we will be obliged to impose fines where we find infringements and so that’s what will happen but we expect that it’s the corrective powers that we apply — the bans on processing, the requirements to bring processing operations into compliance that’s going to have the more significant effects,” she said, suggesting that under her watch the DPC will not shy away from using corrective powers if or when an infringement demands it.

The case for special measures

Also speaking today in a different public forum, Europe’s competition chief, Margrethe Vestager, made a similar point to Dixon’s about the uphill challenge for EU citizens to enforce their rights.

“We have what you could call digital citizens’ rights — the GDPR — but that doesn’t solve the question of how much data can be collected about you,” she said during an on-stage interview at the Web Summit conference in Lisbon, where she was asked whether platforms should have a fiduciary duty towards users to ensure they are accountable for what they’re distributing. The antitrust commissioner is also set for an expanded digital strategy role in the incoming European Commission.

“We also need better protection and better tools to protect ourselves from leaving a trace everywhere we go,” she suggested. “Maybe we would like to be more able to choose what kind of trace we would leave behind. And that side of the equation will have to be part of the discussion as well. How can we be better protected from leaving that trace of data that allows companies to know so much more about any one of us than we might even realize ourselves?”

“I myself am very happy that I have digital rights; my problem is that I find it very difficult to enforce them,” Vestager added. “The only real result of me reading terms and conditions is that I get myself distracted from wanting to read the article that made me tap the T&Cs. So we need that to be understandable so that we know what we’re dealing with. And we need software and services that will enable us not to leave the same kind of trace as we would otherwise do… I really hope that the market will also help us here. Because it’s not just for politicians to deal with this — it is also in an interaction with the market that we can find solutions. Because one of the main challenges in dealing with AI is of course that there is a risk that we will regulate for yesterday. And then it’s worth nothing.”

Asked at what point she would herself advocate for big tech companies to be broken up, Vestager said there would need to be a competition case that involves damage that’s extreme enough to justify it. “We don’t have that kind of case right now,” she argued. “I will never exclude that that could happen but so far we don’t have a problem that big that breaking up a company would be the solution.”

She also warned against the risk of potentially creating more problems by framing the problem of platform giants as a size issue — and therefore the solution as breaking the giants up.

“The people advocating it don’t have a model as to how to do this. And you know the story about the ancient creature: when you chopped off one head, two or seven came up — so there is a risk you do not solve the problem, you just have many more problems,” she said. “And you don’t have a way of at least trying to control it. So I am much more in the line of thinking that you should say that when you become that big you get a special responsibility — because you are de facto the rule setter in the market that you own. And we could be much more precise about what that then entails. Because otherwise there’s a risk that the many, many interesting companies have no chance of competing.”


Nightfall emerges from stealth with $20M for a cloud-native data loss prevention platform

16:26 | 7 November

Sensitive data leakage is one of the biggest negative side effects of cloud-based apps and services. Today, a startup that has built an AI-based platform to detect and act on that data is coming out of stealth with funding to tackle the issue head-on. Nightfall — which integrates with apps like Slack, GitHub, AWS and hundreds more, automatically scanning the structured and unstructured data that appears in them for sensitive information, which it then acts to secure — is launching publicly today with $20.3 million in funding.

Nightfall’s CEO Isaac Madan said the startup will use the money to expand the scope of what it can detect and where, and to build out its business overall. The company bills itself as the industry’s first cloud-native data loss prevention platform. While in stealth, it started out working with high-growth startups like Grofers and Exabeam, later expanding its customer base to Fortune 500 companies.

The company originally launched as Watchtower AI — a name that Madan, who co-founded the company with Rohan Sathe, said was changed to reflect its expanded scope: not just identifying unstructured data but being able to take action on it. It had previously raised just under $5 million in angel funding that had not been disclosed. Bain Capital Ventures and Venrock have co-led this latest, bigger round of $15.5 million, with Pear VC (Pejman Nozad); Sri Viswanath, CTO of Atlassian; and Kelvin Beachum, Jr. of the New York Jets all also participating.

If $20.3 million sounds like a sizeable investment for a company that had yet to build up a public profile, that’s partly because of the nature of what the company is doing, and the people behind it.

Madan studied computer science at Stanford, where he worked on machine learning research focused on HR and recruitment data. He has one exit already under his belt (a networking and advice platform called Chalky) and has worked as an investor at Venrock and an analyst at Pejman Mar Ventures. Sathe, meanwhile, was the lead engineer who built and scaled Uber Eats.

Between the two of them, their experience spans a range of use cases where teams handle many petabytes of data across multiple applications, with many opportunities for data leaks. The lack of any products on the market to address this is what led them to build Nightfall, Madan said in an interview.

Nightfall is tackling a specific issue in the market. Cloud-based collaboration platforms have been the making of distributed teams, which can use them to communicate with each other and work together, sharing data from different apps to get things done even when they are not in the same physical space. But they have also opened the door to a potential problem when it comes to data protection: the information shared on these platforms can contain sensitive data, so having it on there becomes a security risk.

“Business-critical data exists across different systems like Slack, GitHub, AWS and other apps, and that means sensitive and financial information can proliferate broadly across an organization’s systems,” Madan said. “We leverage machine learning to discover and classify that data” — which is often unstructured when it appears on these platforms.

Of course, each platform in itself can be secure, “but they don’t account for how users of these apps store and use data,” said Madan.

And this is a big big-data task: there are petabytes of data at play covering hundreds of different applications. Nightfall has built machine learning models to scan all of this, detect sensitive data, and then either take automatic action or offer options for manual action: typically, to delete, redact or quarantine the data, or to notify the relevant teams so they can take appropriate action.
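To make that detect-then-act loop concrete, here is a minimal sketch of how such a pipeline could be wired up in Python. Everything in it is hypothetical: the regex detectors stand in for the trained ML classifiers Madan describes, and the policy table simply mirrors the delete, redact, quarantine and notify options above.

    import re

    # Hypothetical detectors standing in for trained ML classifiers.
    DETECTORS = {
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "aws_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    }

    # Policy table: detection type -> remediation action.
    POLICY = {"ssn": "redact", "aws_key": "quarantine"}

    def scan(text):
        """Return (finding, action) pairs for one chunk of app data."""
        return [(kind, POLICY.get(kind, "notify"))
                for kind, pattern in DETECTORS.items()
                if pattern.search(text)]

    def remediate(text, findings):
        for kind, action in findings:
            if action == "quarantine":
                return None  # pull the item out of the app entirely
            if action == "redact":
                text = DETECTORS[kind].sub("[REDACTED]", text)
            else:
                print("notify: possible", kind, "leak")  # alert the security team
        return text

    message = "SSN 123-45-6789 pasted into a Slack thread"
    print(remediate(message, scan(message)))  # SSN [REDACTED] pasted into ...

A production system would swap the regexes for ML models and feed scan() from each app’s event stream via its API, but the detect-classify-act shape stays the same.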

As part of this latest round Enrique Salem — notably, the former CEO of Symantec and a partner at Bain Capital Ventures — is joining Nightfall’s Board of Directors.

“Isaac, Rohan, and the Nightfall team are addressing a deep and profound need in the world of information security, where I have spent nearly three decades of my career,” he said. “With the proliferation of data across the cloud, high accuracy content inspection that is easy to operationalize is more important than ever. Nightfall has built a powerful and elegant solution to this problem. We are delighted to have been investors since the very beginning and to continue deepening our partnership with the team.”


Neural Magic gets $15M seed to run machine learning models on commodity CPUs

17:01 | 6 November

Neural Magic, a startup founded by an MIT professor who figured out a way to run machine learning models on commodity CPUs, announced a $15 million seed investment today.

Comcast Ventures led the round, with participation from NEA, Andreessen Horowitz, Pillar VC and Amdocs. The company had previously received a $5 million pre-seed, making the total raised so far $20 million.

The company also announced early access to its first product, an inference engine that data scientists can run on ordinary CPU-based computers rather than specialized chips like GPUs or TPUs. That means it could greatly reduce the cost associated with machine learning projects by allowing data scientists to use commodity hardware.
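Neural Magic’s engine itself was only in early access at the time of writing, so as a rough stand-in, here is what CPU-only inference looks like with the open-source ONNX Runtime; the model file name and input shape are placeholders, not details of the company’s product.

    import numpy as np
    import onnxruntime as ort  # generic runtime that can execute models on plain CPUs

    # Pin execution to the CPU provider: no GPU or TPU is involved anywhere.
    session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

    # Input names and shapes come from the exported model itself.
    input_name = session.get_inputs()[0].name
    batch = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder batch

    outputs = session.run(None, {input_name: batch})
    print(outputs[0].shape)

The company’s claim is that its engine slots into this same commodity-hardware workflow while closing the speed gap with specialized accelerators.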

The idea for this solution came from work by MIT professor Nir Shavit. As he tells it, he was working on neurobiology data in his lab and found a way to use the commodity hardware he had in place. “I discovered that with the right algorithms we could run these machine learning algorithms on commodity hardware, and that’s where the company started,” Shavit told TechCrunch.

He says there is a false notion that you need specialized chips or hardware accelerators to have the necessary resources to run these jobs, but it doesn’t have to be that way. His company not only lets you use commodity hardware, he says; it also works with more modern development approaches like containers and microservices.

“Our vision is to enable data science teams to take advantage of the ubiquitous computing platforms they already own to run deep learning models at GPU speeds — in a flexible and containerized way that only commodity CPUs can deliver,” Shavit explained.

He says this also eliminates the memory limitations of these other approaches because CPUs have access to much greater amounts of memory, and this is a key advantage of his company’s approach over and above the cost savings.

“Yes, running on a commodity processor you get the cost savings of running on a CPU, but more importantly, it eliminates all of these huge commercialization problems and essentially this big limitation of the whole field of machine learning of having to work on small models and small data sets because the accelerators are kind of limited. This is the big unlock of Neural Magic,” he said.

Gil Beyda, Managing Director at lead investor Comcast Ventures, sees a huge market opportunity in an approach that lets people use commodity hardware. “Neural Magic is well down the path of using software to replace high-cost, specialized AI hardware. Software wins because it unlocks the true potential of deep learning to build novel applications and address some of the industry’s biggest challenges,” he said in a statement.

