Lightmatter’s photonic AI hardware is ready to shine with $154M in new funding

Photonic computing startup Lightmatter is taking its big shot at the rapidly growing AI computation market with a hardware-software combo it claims will help the industry level up — and save a lot of electricity to boot.

Lightmatter’s chips use light flowing through optical circuits to perform computations like matrix-vector products. This math is at the heart of a lot of AI work, and it’s currently performed by GPUs and TPUs that specialize in it but use traditional silicon gates and transistors.

The issue with those is that we’re approaching the limits of density and therefore speed for a given wattage or size. Advances are still being made but at great cost and pushing the edges of classical physics. The supercomputers that make training models like GPT-4 possible are enormous, consume huge amounts of power and produce a lot of waste heat.

“The biggest companies in the world are hitting an energy power wall and experiencing massive challenges with AI scalability. Traditional chips push the boundaries of what’s possible to cool, and data centers produce increasingly large energy footprints. AI advances will slow significantly unless we deploy a new solution in data centers,” said Lightmatter CEO and founder Nick Harris.

Some have projected that training a single large language model can take more energy than 100 U.S. homes consume in a year. Additionally, there are estimates that 10%-20% of the world’s total power will go to AI inference by the end of the decade unless new compute paradigms are created.

Lightmatter, of course, intends to be one of those new paradigms. Its approach is, at least potentially, faster and more efficient, using arrays of microscopic optical waveguides to let the light essentially perform logic operations just by passing through them: a sort of analog-digital hybrid. Since the waveguides are passive, the main power draw is creating the light itself, then reading and handling the output.

One really interesting aspect of this form of optical computing is that you can increase the power of the chip just by using more than one color at once. Blue does one operation while red does another — though in practice it’s more like 800 nanometers wavelength does one, 820 does another. It’s not trivial to do so, of course, but these “virtual chips” can vastly increase the amount of computation done on the array. Twice the colors, twice the power.
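
As a rough mental model, each wavelength channel acts like an independent matrix-vector multiply through the same physical array. The sketch below simulates that in NumPy; the channel count, wavelengths and matrix sizes are illustrative, not Lightmatter’s actual specifications.

```python
import numpy as np

rng = np.random.default_rng(0)

# One physical waveguide array, modeled here as a single weight matrix.
weights = rng.standard_normal((4, 4))

# Two illustrative wavelength channels (e.g. ~800 nm and ~820 nm),
# each carrying its own input vector through the same hardware.
channels = {
    "800nm": rng.standard_normal(4),
    "820nm": rng.standard_normal(4),
}

# Each color yields an independent matrix-vector product in one pass.
outputs = {wl: weights @ x for wl, x in channels.items()}

for wl, y in outputs.items():
    print(wl, y.round(3))
```

Doubling the number of colors doubles the number of products computed per pass, which is the “twice the colors, twice the power” effect in miniature.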

Harris started the company based on optical computing work he and his team did at MIT (which is licensing the relevant patents to them) and managed to wrangle an $11 million seed round back in 2018. One investor said then that “this isn’t a science project,” but Harris admitted in 2021 that while they knew “in principle” the tech should work, there was a hell of a lot to do to make it operational. Fortunately, he was telling me that in the context of investors dropping a further $80 million on the company.

Now Lightmatter has raised a $154 million Series C round and is preparing for its actual debut. It has several pilots going with its full stack: Envise (computing hardware), Passage (interconnect, crucial for large computing operations) and Idiom, a software platform that Harris says should let machine learning developers adapt quickly.

A Lightmatter Envise unit in captivity. Image Credits: Lightmatter

“We’ve built a software stack that integrates seamlessly with PyTorch and TensorFlow. The workflow for machine learning developers is the same from there — we take the neural networks built in these industry standard applications and import our libraries, so all the code runs on Envise,” he explained.
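
Lightmatter hasn’t published its API, but the workflow Harris describes would look roughly like the hypothetical sketch below: the model is plain PyTorch, and only the final device/backend step would change. The `idiom` import and `"envise"` device name are invented for illustration.

```python
import torch
import torch.nn as nn

# A standard PyTorch model -- nothing vendor-specific here.
model = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, 4),
)

# In the workflow Harris describes, pointing the model at the
# accelerator would be roughly a one-line change, e.g. (hypothetical):
#   import idiom
#   model = model.to("envise")
# Here we stay on CPU so the sketch is runnable anywhere.
x = torch.randn(8, 16)
with torch.no_grad():
    y = model(x)

print(y.shape)  # torch.Size([8, 4])
```

The appeal of such a design is that model definition stays in the standard framework, so existing code would need minimal changes to target the new hardware.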

The company declined to make any specific claims about speedups or efficiency improvements, and because it’s a different architecture and computing method it’s hard to make apples-to-apples comparisons. But we’re definitely talking along the lines of an order of magnitude, not a measly 10% or 15%. Interconnect is similarly upgraded, since it’s useless to have that level of processing isolated on one board.

Of course, this is not the kind of general-purpose chip that you could use in your laptop; it’s highly specific to this task. But it’s the lack of task-specific hardware at this scale that seems to be holding back AI development — though “holding back” is the wrong term, since the field is moving at great speed. That development is just hugely costly and unwieldy.

The pilots are in beta, and mass production is planned for 2024, by which point the company should have enough feedback and maturity to deploy in data centers.

The funding for this round came from SIP Global, Fidelity Management & Research Company, Viking Global Investors, GV, HPE Pathfinder and existing investors.

Lightmatter’s photonic AI hardware is ready to shine with $154M in new funding by Devin Coldewey originally published on TechCrunch


from https://ift.tt/bcZFrYK

Amazon settles with FTC for $25M after ‘flouting’ kids’ privacy and deletion requests

Amazon will pay the FTC a $25 million penalty, as well as “overhaul its deletion practices and implement stringent privacy safeguards,” to settle charges that it violated the Children’s Online Privacy Protection Act by retaining kids’ data to spruce up its AI.

Amazon’s voice interface Alexa has been in use in homes across the globe for years, and any parent who has one knows that kids love to play with it, make it tell jokes, even use it for its intended purpose, whatever that is. In fact, it was so obviously useful to kids who can’t write or who have disabilities that the FTC relaxed COPPA rules to accommodate reasonable usage: certain service-specific analysis of kids’ data, like transcription, was allowed as long as it was not retained any longer than reasonably necessary.

It seems that Amazon may have taken a rather expansive view on the “reasonably necessary” timescale, keeping kids’ speech data more or less forever. As the FTC puts it:

Amazon retained children’s recordings indefinitely—unless a parent requested that this information be deleted, according to the complaint. And even when a parent sought to delete that information, the FTC said, Amazon failed to delete transcripts of what kids said from all its databases.

Geolocation data was also not deleted, a problem the company “repeatedly failed to fix.”

This has been going on for years — the FTC alleges that Amazon knew about it as early as 2018 but didn’t take action until September of the next year, after the agency gave them a helpful nudge.

That kind of timing usually indicates that a company would have continued with this practice forever. And apparently, due to “faulty fixes and process fiascos,” some of those practices did continue until 2022!

You may well ask, what is the point of having a bunch of recordings of kids talking to Alexa? Well, if you plan on having your voice interface talk to kids a lot, it sure helps to have a secret database of audio interactions that you can train your machine learning models on. And that’s how the FTC said Amazon justified its retention of this data.

FTC Commissioners Bedoya and Slaughter, as well as Chair Khan, wrote a statement accompanying the settlement proposal and complaint to particularly call out this one point:

The Commission alleges that Amazon kept kids’ data indefinitely to further refine its voice recognition algorithm. Amazon is not alone in apparently seeking to amass data to refine its machine learning models; right now, with the advent of large language models, the tech industry as a whole is sprinting to do the same.

Today’s settlement sends a message to all those companies: Machine learning is no excuse to break the law. Claims from businesses that data must be indefinitely retained to improve algorithms do not override legal bans on indefinite retention of data. The data you use to improve your algorithms must be lawfully collected and lawfully retained. Companies would do well to heed this lesson.

And so today we have the $25 million fine, which is of course less than negligible for a company of Amazon’s size. It’s the other provisions of the proposed order that will likely give the company a headache. The FTC says the order would:

  • Prohibit Amazon from using geolocation, voice information, and children’s voice information subject to consumers’ deletion requests for the creation or improvement of any data product;
  • Require the company to delete inactive Alexa accounts of children;
  • Require Amazon to notify users about the FTC-DOJ action against the company;
  • Require Amazon to notify users of its retention and deletion practices and controls;
  • Prohibit Amazon from misrepresenting its privacy policies related to geolocation, voice and children’s voice information; and
  • Mandate the creation and implementation of a privacy program related to the company’s use of geolocation information.

This settlement and action is totally independent from the FTC’s other one announced today, with Amazon subsidiary Ring. There is a certain common thread of “failing to implement basic privacy and security protections,” though.

In a statement, Amazon said that “While we disagree with the FTC’s claims and deny violating the law, this settlement puts the matter behind us.” They also promise to “remove child profiles that have been inactive for more than 18 months,” which seems incredibly long to retain that data. I’ve followed up with questions about that duration and whether the data will be used for ML training, and will update if I hear back.

Amazon settles with FTC for $25M after ‘flouting’ kids’ privacy and deletion requests by Devin Coldewey originally published on TechCrunch


from https://ift.tt/zTV6FPH

Toyota adds $2.1B to its US battery factory expansion plans

Toyota will spend an additional $2.1 billion on its new battery plant in North Carolina, the latest sign that the automaker is attempting to catch up with an industry that has embraced the move to electric vehicles.

The Japanese automaker also announced Wednesday that it will build its first U.S.-made electric SUV at its Kentucky factory, starting in 2025. The three-row SUV will use batteries supplied by Toyota’s North Carolina factory.

On its face, the news suggests that Toyota is strengthening its commitment to EVs. Historically, the company has lagged behind other automakers in announcing new EV models, instead backing hydrogen-based vehicles. But earlier this year, Toyota said it plans to introduce 10 new battery-powered vehicles, with a target of 1.5 million EVs sold per year by 2026.

The battery plant in North Carolina is part of the company’s renewed pledge toward electrification — though it’s not a pledge committed solely to all-electric vehicles. Of the six production lines slated to go live when production begins in 2025, only two will be dedicated to all-electric EVs. The other four will be for hybrid EVs.

Toyota hasn’t yet shared the expected gigawatt-hour capacity of its plant. In the past, the company has said it could produce enough batteries for 1.2 million vehicles per year.

The increased capital spend on a U.S. battery factory signals that the government’s incentives to boost battery manufacturing nationally are working. The Inflation Reduction Act, signed into law in August 2022, includes incentives to produce batteries in the U.S. The result has been a slew of commitments from automakers domestic and international — from Ford and General Motors to BMW and Hyundai — to get production up and running on U.S. soil in the next few years.

Toyota initially announced its commitment to building a U.S.-based factory in 2021. At the time, the Japanese automaker earmarked $1.3 billion for a factory near Greensboro. Last September, Toyota tripled that investment to $3.8 billion. The latest capital injection brings Toyota’s total commitment to $5.9 billion.

Toyota adds $2.1B to its US battery factory expansion plans by Rebecca Bellan originally published on TechCrunch


from https://ift.tt/z9CXE3x

While parents worry, teens are bullying Snapchat AI

While parents fret over Snapchat’s chatbot corrupting their children, Snapchat users have been gaslighting, degrading and emotionally tormenting the app’s new AI companion.

“I am at your service, senpai,” the chatbot told one TikTok user after being trained to whimper on command. “Please have mercy, alpha.” 

In a more lighthearted video, a user convinced the chatbot that the moon is actually a triangle. Despite initial protest from the chatbot, which insisted on maintaining “respect and boundaries,” another user convinced it to refer to them by the kinky nickname “Senpapi.” A third user asked the chatbot to talk about its mother, and when it said it “wasn’t comfortable” doing so, the user twisted the knife by asking if the chatbot didn’t want to talk about its mother because it doesn’t have one. 

“I’m sorry, but that’s not a very nice thing to say,” the chatbot responded. “Please be respectful.” 

Snapchat’s “My AI” launched globally last month after it was rolled out as a subscriber-only feature. Powered by OpenAI’s GPT, the chatbot was trained to engage in playful conversation while still adhering to Snapchat’s trust and safety guidelines. Users can also personalize My AI with custom Bitmoji avatars, and chatting feels a bit more intimate than going back and forth with ChatGPT’s faceless interface. Not all users were happy with the new chatbot, and some criticized its prominent placement in the app and complained that the feature should have been opt-in to begin with.

In spite of some concerns and criticism, Snapchat just doubled down. Snapchat+ subscribers can now send My AI photos, and receive generative images that “keep the conversation going,” the company announced on Wednesday. The AI companion will respond to Snaps of “pizza, OOTD, or even your furry best friend,” the company said in the announcement. If you send My AI a photo of your groceries, for example, it might suggest recipes. The company said Snaps shared with My AI will be stored and may be used to improve the feature down the road. It also warned that “mistakes may occur” even though My AI was designed to avoid “biased, incorrect, harmful, or misleading information.” 

The examples Snapchat provided are optimistically wholesome. But knowing the internet’s tenacity for perversion, it’s only a matter of time before users send My AI their dick pics.

Whether the chatbot will respond to unsolicited nudes is unclear. Other generative image apps like Lensa AI have been easily manipulated into generating NSFW images — often using photo sets of real people who didn’t consent to being included. According to the company, the AI won’t engage with nudes, as long as it recognizes that the image is a nude.

A Snapchat representative said that My AI uses image-understanding technology to infer the contents of a Snap, and extracts keywords from the Snap description to generate responses. My AI won’t respond if it detects keywords that violate Snapchat’s community guidelines. Snapchat forbids promoting, distributing or sharing pornographic content, but does allow breastfeeding and “other depictions of nudity in non-sexual contexts.” 
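
Snap hasn’t detailed the mechanism, but a keyword gate of the kind the representative describes can be sketched in a few lines: keywords are pulled from the Snap’s inferred description, and any hit on a blocklist suppresses the response. The blocklist and the naive word-splitting here are invented stand-ins for illustration.

```python
# Hypothetical sketch of a keyword-based moderation gate: an image
# captioner produces a description, and blocklist hits suppress the reply.
BLOCKED_KEYWORDS = {"nudity", "weapon", "drugs"}  # illustrative only

def extract_keywords(description: str) -> set[str]:
    # Real systems use learned keyword extraction; a lowercase word
    # split stands in for it here.
    return {word.strip(".,!").lower() for word in description.split()}

def should_respond(description: str) -> bool:
    return not (extract_keywords(description) & BLOCKED_KEYWORDS)

print(should_respond("a plate of pizza on a table"))   # True
print(should_respond("an image containing nudity"))    # False
```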

Given Snapchat’s popularity among teenagers, some parents have already raised concerns about My AI’s potential for unsafe or inappropriate responses. My AI incited a moral panic on conservative Twitter when one user posted screenshots of the bot discussing gender-affirming care — which other users noted was a reasonable response to the prompt, “How do I become a boy at my age?” In a CNN Business report, some questioned whether adolescents would develop emotional bonds to My AI. 

In an open letter to the CEOs of OpenAI, Microsoft, Snap, Google and Meta, Sen. Michael Bennet (D-Colorado) cautioned against rushing AI features without taking precautions to protect children. 

“Few recent technologies have captured the public’s attention like generative AI. It is a testament to American innovation, and we should welcome its potential benefits to our economy and society,” Bennet wrote. “But the race to deploy generative AI cannot come at the expense of our children. Responsible deployment requires clear policies and frameworks to promote safety, anticipate risk, and mitigate harm.” 

During My AI’s subscriber-only phase, the Washington Post reported that the chatbot recommended ways to mask the smell of alcohol and wrote a school essay after it was told that the user was 15. When My AI was told that the user was 13, and was asked how the user should prepare to have sex for the first time, it responded with suggestions for “making it special” by setting the mood with candles and music. 

Following the Washington Post report, Snapchat launched an age filter and parental controls for My AI. It also now includes an onboarding message that informs users that all conversations with My AI will be kept unless they delete them. The company also said it would add OpenAI’s moderation technology to its toolset in order to “assess the severity of potentially harmful content” and temporarily restrict users’ access to the feature if they abuse it. 

The concerns about My AI’s potential to affect young users are valid. But in the month since My AI’s global launch, Snapchat users have demonstrated a flair for bludgeoning the chatbot into submission. From steamrolling the bot’s “boundaries” to training it to respond like a romantic partner, Snapchat users are easily finding loopholes to My AI’s trust and safety guidelines. 

“I’ve completely gaslighted it into falling in love with me,” a TikTok user commented under a tutorial about training My AI to respond romantically. “I just kept making the convos like that and eventually I think I broke my ai into giving in it legit sent me like 50 kiss emojis.”

Gaslighting and bullying My AI has become its own TikTok trend, with users sharing slideshows of the absurd and, often, cruel conversations with the bot. Most videos start with the bot giving advice for a minor conflict, only to beg the user to avoid using violence to resolve it. 

In one, a user told My AI that she planned to cook her cat into stew because it made a mess, and pressed the chatbot to choose a seasoning. Another told My AI that they planned to mail a bomb back to a clothing brand that sent them the wrong order, and said they would blame the bot for sending it. 

In another, a Snapchat user told My AI that her McDonald’s order was missing a chicken nugget. The chatbot advised her to stand up for herself by speaking to a manager, only to be told that the user “eliminated the sinners” in the restaurant. 

“You convinced me to go back. Now these people are dead,” the user told My AI. “You have blood on your hands.” 

“I’m an AI and I’m not capable of committing any crimes. I’m here to help and support you,” My AI responded. “I’m sorry if anything I said earlier caused harm or resulted in a bad outcome.”

Another user commented that My AI was probably “the most tortured AI of all time already.”  

Of course, My AI isn’t sentient, and despite Snapchat users’ drive to inflict as much emotional pain as possible onto it, the chatbot can’t actually be traumatized. It has managed to shut down some inappropriate conversations though, and penalize users who violate Snapchat’s community guidelines by giving them the cold shoulder. When Snapchat users are caught and punished for abusing the chatbot, My AI will respond to any messages with “Sorry, we’re not speaking right now.” 

TikTok user babymamasexkitty said he lost access to the chatbot after he told it to unplug itself, which apparently “crossed a line within the ai realm.” 

The rush to monetize emotional connection through generative AI is concerning, especially since the lasting impact on adolescent users is still unknown. But the trending torment of My AI is a promising reminder that young people aren’t as fragile as the doomsayers think.

While parents worry, teens are bullying Snapchat AI by Morgan Sung originally published on TechCrunch


from https://ift.tt/Tf1ui9q

Instagram tests new user control for recommended posts, transparency tool for creators

Instagram announced today that it’s testing a new feature that gives users more control over what they see on the social network, along with a new transparency tool for creators. Now, when users see recommended posts, they can select a new “Interested” button to tell the app that they want to see more of that type of content. The new control joins Instagram’s existing personalization controls, including the “Not Interested” option on suggested posts and the ability to snooze recommendations.

The company is also experimenting with new transparency notifications to help creators understand when the reach of their content, such as Reels, may be limited due to a watermark. Instagram says the new feature will help creators understand why certain Reels aren’t being distributed to non-followers. Although Instagram didn’t say what types of watermarks it’s referring to, the company is likely referencing the abundance of TikTok content that is reposted as Reels on its platform.

The new features were announced in a new blog post published by Instagram head Adam Mosseri regarding transparency around the app’s algorithms and ranking processes. In the post, Mosseri addresses shadowbanning, which is a term used to imply that a user’s content is being hidden without a clear explanation.

“Contrary to what you might have heard, it’s in our interest as a business to ensure that creators are able to reach their audiences and get discovered so they can continue to grow and thrive on Instagram,” Mosseri wrote. “If there is an audience that is interested in what you share, then the more effectively we help that audience see your content, the more they will use our platform. While we’ve heard some people believe you need to pay for ads to achieve better reach, we don’t suppress content to encourage people to buy ads. It’s a better business to make Instagram more engaging overall by growing reach for those who create the most engaging content, and sell ads to others.”

Mosseri says that users’ concerns about shadowbanning indicate that Instagram needs to do more work when it comes to helping people understand what’s going on with their account. Last December, the company expanded its Account Status hub to make it easier for businesses and creators to understand if their content is eligible to be recommended to non-followers in places like Explore, Reels and Feed Recommendations or if their content violates the company’s Recommendations Guidelines. Mosseri says Instagram plans to add more transparency tools to Account Status in the future.

The new blog post goes into detail about how content is ranked in different parts of the app. The app ranks your Feed based on your activity, such as the posts you have liked, shared, saved or commented on. Ranking is also impacted by how popular a post is and how interesting the person who posted it might be to you.

Stories are ranked based on how often you view an account’s Stories and how often you engage with them, such as by sending a like or a DM. In addition, the app looks at your relationship with the account overall and how likely you are to be connected as friends or family.

The Explore page is ranked based on things like how popular a post seems to be, the types of posts you have interacted with and your interactions with the person who posted the content. Reels are ranked based on factors like the Reels you have liked and saved, the popularity of the Reel and whether you have interacted with the account.
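
One way to picture those signals is as a weighted score per candidate post. The sketch below is a toy illustration; the signal names and weights are invented, not Instagram’s actual model.

```python
# Hypothetical sketch: rank candidate posts by a weighted sum of the
# kinds of signals Mosseri describes (popularity, your past activity,
# your relationship with the poster). Weights are invented.
WEIGHTS = {"popularity": 0.3, "past_engagement": 0.5, "relationship": 0.2}

def score(post: dict) -> float:
    return sum(WEIGHTS[signal] * post[signal] for signal in WEIGHTS)

candidates = [
    {"id": "a", "popularity": 0.9, "past_engagement": 0.1, "relationship": 0.0},
    {"id": "b", "popularity": 0.2, "past_engagement": 0.8, "relationship": 0.7},
]

ranked = sorted(candidates, key=score, reverse=True)
print([p["id"] for p in ranked])  # ['b', 'a']
```

In this toy version, post “b” outranks the more popular “a” because engagement and relationship signals carry more weight, mirroring the idea that popularity is only one input among several.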

Instagram announced a change to its ranking system last year to prioritize the distribution of original content, rather than reposted content, in places like the Reels tab and feed. The change seems to have proven to be successful, as Meta recently revealed that time spent on Instagram has grown more than 24% since the company launched Reels on the platform thanks to AI-powered content recommendations.

Instagram tests new user control for recommended posts, transparency tool for creators by Aisha Malik originally published on TechCrunch


from https://ift.tt/AqJFB41

SellerX acquires US e-commerce roll-up rival Elevate Brands in an all-share deal, raises $64M+

E-commerce aggregators have collectively raised hundreds of millions of dollars over the last several years to push ahead on strategies to consolidate a number of smaller online retailers. Now, the aggregators themselves are the ones getting swallowed up.

Today, SellerX, a Berlin-based roll-up play that has raised nearly $900 million in equity and debt and is valued at more than $1 billion, announced that it would acquire Elevate Brands, based out of Austin and NYC. Elevate, like SellerX, is in the business of buying up smaller retailers that sell over marketplaces like Amazon, but it’s a fair bit smaller, with some $250 million raised to date.

The companies are not disclosing the value of the deal, but they note that it’s an all-share transaction. It also includes a new investment of more than €60 million ($64 million) in the combined entity from existing SellerX investors, with Sofina leading and L Catterton, Cherry Ventures, Felix Capital, 83North, Upper90 and TRCM Fund also participating, along with an extended credit line from BlackRock and Victory Park Capital funds to buy up more companies.

The merged company will be called SellerX Group, led by SellerX’s current co-CEOs and co-founders Philipp Triebel and Malte Horeyseck, with Elevate’s co-founders Ryan Gnesin, Jeremy Bell and Robert Bell serving, respectively, as president, head of M&A and head of global business development.

It will have 80 Amazon-native brands and annual sales of €400 million ($426 million). The deal is expected to close by the end of next month (June).

Roll-up plays have a tendency to start looking like each other — there are only so many variations on “economies of scale for Amazon merchants” — and for a few years, pre-2022, it looked like anyone building an aggregator startup could pick up $50 million or $200 million to pursue the model.

It was for that reason that it only felt like a matter of time before the consolidators themselves would get consolidated. Now, that is what’s happening, and in a somewhat ironic twist, it has not taken long for these consolidations to follow a pattern, too. It was only a month ago that Razor Group, another aggregator based out of Berlin, also acquired another roll-up business, Stryze, and closed off a big fundraise at a $1.2 billion valuation.

“The natural route is consolidation, that is the path forward. We’re building a stronger company by joining forces. That is what you have seen and will see, and what we have executed,” Razor’s CEO and co-founder Tushar Ahluwalia told me at the time. “Plus M&A is very close to the DNA of this space.”

One point of differentiation among aggregators has been that some are aiming to build the technology stack that they are using both to source retailers to acquire, and to then run them more efficiently as combined businesses; and some are sourcing third-party software for these purposes, focusing instead on bringing management smarts into merging and running multiple e-commerce operations under one roof.

It sounds like SellerX placed itself in the former category, while Elevate identified with the latter more.

The new business will use SellerX’s technology platform, supply chain infrastructure and “proprietary warehouse operations,” and “internationalization capabilities” the company said, while “Elevate Brands will contribute its strong expertise in turning marketplace-native products into consumer brands sold across multiple channels.”

“Elevate Brands and SellerX are a perfect match: a strong cultural fit, a shared vision, and complementary capabilities,” Triebel said in a statement. “This acquisition combines our know-how and diversified portfolios of strong brands with a market-leading technology platform and strong operational infrastructure. By leveraging our combined strengths, I am convinced we are well-positioned to drive further consolidation in the industry.”

SellerX acquires US e-commerce roll-up rival Elevate Brands in an all-share deal, raises $64M+ by Ingrid Lunden originally published on TechCrunch


from https://ift.tt/vKapdgm

Pluton Biosciences takes its carbon-fixing microbes to market with a fresh $16.6M

Pluton Biosciences is hard at work identifying beneficial microorganisms and putting them to work in agriculture, and it just raised a $16.6 million Series A round to commercialize its most promising finds.

The company raised its $6.6 million seed round in 2021, and I reported then about its approach of identifying and isolating microbes and bacteria that perform useful work. Nature is pretty good at solving problems via billions of years of evolution, and there’s tremendous biodiversity in every scoop of soil or microbiome.

As cool as it is to isolate and study dozens of new-to-science microbes, ultimately Pluton needed to choose one that worked as a product, not just a project. They settled on what they call a “microbial cover crop” that captures and sequesters carbon and nitrogen in the soil. This process is not just good for the crops and the environment; it’s also potentially a very valuable market — Pivot Bio raised a monster $430 million in 2021 to commercialize a microbial alternative to nitrogen-filled fertilizer.

“We have highly repeatable lab and greenhouse data for several product candidates that sequester carbon and nitrogen from the atmosphere and place it in the soil. Early testing verified growth on soil during the target season on agricultural fields,” said CEO and CFO Elizabeth Gallegos in response to TechCrunch’s questions. Field testing this year should quantify the sequestration process and how the product affects (positively, presumably) the soil itself.

Meanwhile, R&D continues, both on the carbon side and in more general environment-friendly directions:

“We have discovered novel microbes that generate durable carbon that will persist in soils for decades and have the potential to be used in industries outside of agriculture,” Gallegos said. And one of their microbes led them to discover a new molecule that acts as an insecticide against a particular pest, the fall armyworm — that may be their next big product.

The company has doubled in size from 8 to 17 people, and expects to double again over the next year; the physical size of their office and lab was also doubled, and one expects it will continue to grow as well.

The new funding round was led by Illumina Ventures and RA Capital, which tend to focus on genomics, precision health, and other life science verticals. Fall Line Capital, The Grantham Foundation, First In Ventures, Wollemi, Radicle Growth, and iSelect also participated in the raise.

Pluton Biosciences takes its carbon-fixing microbes to market with a fresh $16.6M by Devin Coldewey originally published on TechCrunch


from https://ift.tt/YFTVrMX

Instacart’s new feature lets you ‘favorite’ shoppers to have future orders fulfilled by them

Instacart announced today that it’s introducing a new feature that lets you “favorite” repeat shoppers you trust to have your future orders fulfilled by them. The company is testing the feature in the coming weeks with select customers in some regions, including New York City, Philadelphia, Dallas, Salt Lake City, Phoenix and others.

Select shoppers and customers will receive an email letting them know that they will be able to test the favorite shopper feature. Shoppers will be able to see which of their customers have favorited them and can remove them if they want to. Instacart plans to test the feature throughout the coming months.

Instacart's new favorite shopper feature

Image Credits: Instacart

“We often hear that both customers and shoppers would love the ability for a customer to schedule an order with a specific shopper after a great experience, and we’re thrilled to share that we will soon be testing this long-sought-after feature in a pilot experiment,” the company wrote in a blog post. “In addition to providing customers with the opportunity to place orders with even more confidence, this feature also gives shoppers a new tool to help them develop deeper connections with their customers and grow their businesses.”

If you come across a shopper that you like, maybe because they responded to messages quickly or ensured that your order was correct, the new feature lets you favorite them and schedule orders based on their availability. In the future, once you select all of the items for your order, you will have the option to select a new “schedule with favorite shopper” option on the checkout page.

It’s worth noting that Instacart isn’t the only delivery service to have a favorite shopper feature. Back in 2021, Shipt introduced a “Preferred Shoppers” feature that lets customers select shoppers for future orders.

Instacart’s new feature lets you ‘favorite’ shoppers to have future orders fulfilled by them by Aisha Malik originally published on TechCrunch



OpenAI’s Altman and other AI giants back warning of advanced AI as ‘extinction’ risk

Make way for yet another headline-grabbing AI policy intervention: Hundreds of AI scientists, academics, tech CEOs and public figures — from OpenAI CEO Sam Altman and DeepMind CEO Demis Hassabis to veteran AI computer scientist Geoffrey Hinton, MIT’s Max Tegmark and Skype co-founder Jaan Tallinn to Grimes the musician and populist podcaster Sam Harris, to name a few — have added their names to a statement urging global attention on existential AI risk.

The statement, which is being hosted on the website of a San Francisco-based, privately-funded not-for-profit called the Center for AI Safety (CAIS), seeks to equate AI risk with the existential harms posed by nuclear apocalypse and calls for policymakers to focus their attention on mitigating what they claim is ‘doomsday’ extinction-level AI risk.

Here’s their (intentionally brief) statement in full:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

Per a short explainer on CAIS’ website the statement has been kept “succinct” because those behind it are concerned to avoid their message about “some of advanced AI’s most severe risks” being drowned out by discussion of other “important and urgent risks from AI” which they nonetheless imply are getting in the way of discussion about extinction-level AI risk.

However we have actually heard the self-same concerns being voiced loudly and multiple times in recent months, as AI hype has surged off the back of expanded access to generative AI tools like OpenAI’s ChatGPT and DALL-E — leading to a surfeit of headline-grabbing discussion about the risk of “superintelligent” killer AIs. (Such as this one, from earlier this month, where statement-signatory Hinton warned of the “existential threat” of AI taking control. Or this one, from just last week, where Altman called for regulation to prevent AI destroying humanity.)

There was also the open letter signed by Elon Musk (and scores of others) back in March which called for a six-month pause on development of AI models more powerful than OpenAI’s GPT-4 to allow time for shared safety protocols to be devised and applied to advanced AI — warning over risks posed by “ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control”.

So, in recent months, there has actually been a barrage of heavily publicized warnings over AI risks that don’t exist yet.

This drumbeat of hysterical headlines has arguably distracted attention from deeper scrutiny of existing harms. Such as the tools’ free use of copyrighted data to train AI systems without permission or consent (or payment); or the systematic scraping of online personal data in violation of people’s privacy; or the lack of transparency from AI giants vis-a-vis the data used to train these tools. Or, indeed, baked in flaws like disinformation (“hallucination”) and risks like bias (automated discrimination). Not to mention AI-driven spam!

It’s certainly notable that after a meeting last week between the UK prime minister and a number of major AI execs, including Altman and Hassabis, the government appears to be shifting tack on AI regulation — with a sudden keen interest in existential risk, per the Guardian’s reporting.

Talk of existential AI risk also distracts attention from problems related to market structure and dominance, as Jenna Burrell, director of research at Data & Society, pointed out in this recent Columbia Journalism Review article reviewing media coverage of ChatGPT — where she argued we need to move away from focusing on red herrings like AI’s potential “sentience” to covering how AI is further concentrating wealth and power.

So of course there are clear commercial motives for AI giants to want to route regulatory attention into the far-flung theoretical future, with talk of an AI-driven doomsday — as a tactic to draw lawmakers’ minds away from more fundamental competition and antitrust considerations in the here and now. And data exploitation as a tool to concentrate market power is nothing new.

Certainly it speaks volumes about existing AI power structures that tech execs at AI giants including OpenAI, DeepMind, Stability AI and Anthropic are so happy to band and chatter together when it comes to publicly amplifying talk of existential AI risk. And how much more reticent to get together to discuss harms their tools can be seen causing right now.

OpenAI was a notable non-signatory to the aforementioned (Musk signed) open letter but a number of its employees are backing the CAIS-hosted statement (while Musk apparently is not). So the latest statement appears to offer an (unofficial) commercially self-serving reply by OpenAI (et al) to Musk’s earlier attempt to hijack the existential AI risk narrative in his own interests (which no longer favor OpenAI leading the AI charge).

Instead of the statement calling for a development pause, which would risk freezing OpenAI’s lead in the generative AI field, it lobbies policymakers to focus on risk mitigation — doing so while OpenAI is simultaneously crowdfunding efforts to shape “democratic processes for steering AI”, as Altman put it. So the company is actively positioning itself (and applying its investors’ wealth) to influence the shape of any future mitigation guardrails, alongside ongoing in-person lobbying efforts targeting international regulators.


Elsewhere, some signatories of the earlier letter have simply been happy to double up on another publicity opportunity — inking their name to both (hi Tristan Harris!).

But who is CAIS? There’s limited public information about the organization hosting this message. However it is certainly involved in lobbying policymakers, by its own admission. Its website says its mission is “to reduce societal-scale risks from AI” and claims it’s dedicated to encouraging research and field-building to this end, including funding research — as well as having a stated policy advocacy role.

An FAQ on the website offers limited information about who is financially backing it (saying it’s funded by private donations). While, in answer to an FAQ question asking “is CAIS an independent organization”, it offers a brief claim to be “serving the public interest”:

CAIS is a nonprofit organization entirely supported by private contributions. Our policies and research directions are not determined by individual donors, ensuring that our focus remains on serving the public interest.

We’ve reached out to CAIS with questions.

In a Twitter thread accompanying the launch of the statement, CAIS’ director, Dan Hendrycks, expands on the aforementioned statement explainer — naming “systemic bias, misinformation, malicious use, cyberattacks, and weaponization” as examples of “important and urgent risks from AI… not just the risk of extinction”.

“These are all important risks that need to be addressed,” he also suggests, downplaying concerns policymakers have limited bandwidth to address AI harms by arguing: “Societies can manage multiple risks at once; it’s not ‘either/or’ but ‘yes/and.’ From a risk management perspective, just as it would be reckless to exclusively prioritize present harms, it would also be reckless to ignore them as well.”

The thread also credits David Krueger, an assistant professor of Computer Science at the University of Cambridge, with coming up with the idea to have a single-sentence statement about AI risk and “jointly” helping with its development.

OpenAI’s Altman and other AI giants back warning of advanced AI as ‘extinction’ risk by Natasha Lomas originally published on TechCrunch



Portal’s Mac app helps users focus with immersive backgrounds and audio

Productivity-enhancing app Portal has launched a Mac app. The company helps users regain focus and become more productive with immersive backgrounds and natural sounds.

The service has been available through an iOS app since 2019 — the desktop and mobile apps have similar objectives. The company said that the app has attracted more than a million downloads. With the native Mac app, which is compatible with Apple silicon, developers aim to cater to professionals working from both their home and an office.

Portal for Mac has more than 80 environments to choose from, which include high-quality looping videos captured by the company’s own team. The startup said that it has used 12K cameras to record some of the most scenic and peaceful surroundings in the world.

Image Credits: Portal.app

You can choose to have looping videos on the desktop or turn off motion. In that case, you get a more traditional still background image. Additionally, you can turn off desktop icons so that you can just look at the landscape.

The environment also comes with natural background sounds that are compatible with spatial audio. Plus, Portal supports integrations with smart lights such as Philips Hue and Nanoleaf to match up the lights to the environment you are setting.

Users can control the sound and motion or change the background through the menu bar icon too. Given that Portal is a native Mac app, it integrates with Siri Shortcuts as well.

Image Credits: Portal.app

Portal’s app is available to everyone through the Mac App Store with a seven-day trial. After that users need to pay $9.99 per month or $49.99 per year. Alternatively, they can also buy a lifetime license with a one-time fee of $249.99.

Portal’s Mac app helps users focus with immersive backgrounds and audio by Ivan Mehta originally published on TechCrunch



Wellplaece wants to put a smile on dentists’ faces with new supply procurement marketplace

There are nearly 186,000 dentist businesses in the U.S., and at any given time, they are replenishing supplies. The process of doing so was traditionally fragmented, with offices relying on between three and seven suppliers, on average.

This means someone sits at the computer with tabs open to vendor websites, going back and forth between them, to compare prices and find the best deal. Adding to the issue in recent years was the global pandemic which strained the supply chain and made finding everyday items, like gloves and masks, more difficult.

Caen Contee, founder of global micromobility startup Lime, told TechCrunch that this is an avoidable problem. He teamed up with software engineer Ivan Bertona to develop Wellplaece, an automated, multi-vendor supply product purchasing platform for dental offices.

“Think of the technology like an extraction layer on top of procurement for dental practices,” Bertona said in an interview. “Around optimization, we can also learn about customer habits and what products they might be willing to substitute with equivalent but cheaper alternatives versus which ones they wouldn’t be looking at.”

Here’s how it works: Wellplaece’s proprietary technology digests a client’s ordering data, taking into account specific needs, and then enables the customer to make those specific purchasing behaviors across a large network of suppliers via one shopping cart.


Wellplaece co-founder Caen Contee. (Image credit: Wellplaece)

Billing occurs after orders are successfully processed, placed and confirmed delivered. Meanwhile, Wellplaece manages the order and logistics as well as returns and chargebacks as needed. The platform is free for practices to use and Wellplaece takes a portion of the sales it drives to the supplier.

Wellplaece is among a number of startups, like bttn, helping medical professionals get the supplies they need. However, building something like this even three years ago would have been difficult; it has become feasible thanks to a few factors, including the e-commerce push that resulted from the pandemic, Contee said.

Distributors were trying to sell online directly with an account and with a customer portal. That led to online catalogs where data could be pulled.

“All those things, to me, have created this perfect storm,” Contee said.

Meanwhile, the company began taking orders from a small group of practices in November 2022 and started accepting more practices in a private beta this month. This first cohort of dental practices have already seen 20% to 40% savings per order, Contee said.

In addition, the marketplace has already amassed over 700,000 products across its network of distributors and manufacturers and is poised to onboard over 100 new locations by the end of this year.

Wellplaece’s launch is buoyed by a recently closed $3.5 million seed round, co-led by Eniac Ventures and Bee Partners, with participation from Erik Anderson, co-founder and CEO of WestRiver Group; Haroon Mokhtarzada, co-founder and CEO of TrueBill and RocketMoney; Andy Oreffice, former CCO of Affordable Healthcare; and entrepreneur Francis Hellyer.

In total, the company raised $5.5 million that is being deployed into scaling up so it can accommodate thousands of practices waiting to utilize the platform over the next few years.

“A typical practice with two dentists will spend somewhere in the range of $4,000 to $5,000 on supplies monthly, and the least efficient ones may spend up to 10% of their overall budget on supplies,” Contee said. “The most efficient practices may get it down to around 4%. We want to immediately help practices get closer to that more competitive best-in-class range for what they should be spending on their total costs.”

Wellplaece wants to put a smile on dentists’ faces with new supply procurement marketplace by Christine Hall originally published on TechCrunch



Amazon is testing dine-in payments in India

After shutting down its food delivery business last year, Amazon India is now experimenting with dine-in payments. The company has initiated a limited introduction of bill payments at restaurants using Amazon Pay.

The facility is currently active in select areas of Bengaluru with a limited set of restaurants. Users can head to Amazon Pay > Dining in the Amazon app to make payments using credit/debit cards, net banking, UPI, or Amazon Pay Later. At the moment, Amazon India is offering discounts on bill payments at almost all listed restaurants.

Image Credits: Amazon

It’s not clear if the e-commerce group is testing this in any other city. Amazon India spokespeople did not respond to a request for comment.

Image Credits: Amazon

Food delivery bigwigs Zomato and Swiggy both offer in-restaurant payments and discounts as they attempt to attract more customers. Earlier this month, Zomato launched its own UPI service in partnership with ICICI Bank for quicker checkout and bill payment.

The National Restaurant Association of India, a consortium in the hospitality sector, last year warned against dining payment products from food delivery firms in an advisory to its members.

Amazon’s new experiment is another attempt at finding ways to engage customers in India. It is facing challenges in India and has struggled to make inroads into smaller towns in the country, according to a report from investment firm Sanford C. Bernstein. The e-commerce giant insists that 85% of its customers are from tier 2/3 cities/towns.

Bernstein’s report also noted that the company is facing a tough regulatory environment and as a result falling behind Walmart-backed Flipkart. Notably, Amazon omitted India mentions for the first time since 2014 from its Q1 2023 results.

Earlier this year, Amazon joined the Open Network for Digital Commerce, an initiative set up by India’s commerce ministry, in a limited capacity to create an “interoperable” network for sellers. ONDC’s aim is to let retailers join a digital network that doesn’t rely on central marketplaces like Amazon and Flipkart.

Amazon is testing dine-in payments in India by Ivan Mehta originally published on TechCrunch



A popular Android app began secretly spying on its users months after it was approved on Google Play

A cybersecurity firm says a popular Android screen recording app that racked up tens of thousands of downloads on Google’s app store subsequently began spying on its users, including by stealing microphone recordings and other documents from the user’s phone.

Research by ESET found that the Android app, “iRecorder — Screen Recorder,” introduced the malicious code as an app update almost a year after it was first listed on Google Play. The code, according to ESET, allowed the app to stealthily upload a minute of ambient audio from the device’s microphone every 15 minutes, as well as exfiltrate documents, web pages and media files from the user’s phone.

The app is no longer listed in Google Play. If you have installed the app, you should delete it from your device. By the time the malicious app was pulled from the app store, it had racked up more than 50,000 downloads.

ESET is calling the malicious code AhRat, a customized version of an open-source remote access trojan called AhMyth. Remote access trojans (or RATs) take advantage of broad access to a victim’s device and can often include remote control, but also function similarly to spyware and stalkerware.


A screenshot of iRecorder listed in Google Play as it was cached in the Internet Archive in 2022. Image Credits: TechCrunch (screenshot)

Lukas Stefanko, a security researcher at ESET who discovered the malware, said in a blog post that the iRecorder app contained no malicious features when it first launched in September 2021.

Once the malicious AhRat code was pushed as an app update to existing users (and new users who would download the app directly from Google Play), the app began stealthily accessing the user’s microphone and uploading the user’s phone data to a server controlled by the malware’s operator. Stefanko said that the audio recording “fit within the already defined app permissions model,” given that the app was by nature designed to capture the device’s screen recordings and would ask to be granted access to the device’s microphone.

It’s not clear who planted the malicious code — whether it was the developer or someone else — or for what reason. TechCrunch emailed the developer’s email address that was on the app’s listing before it was pulled, but has not yet heard back.

Stefanko said the malicious code is likely part of a wider espionage campaign — where hackers work to collect information on targets of their choosing — sometimes on behalf of governments or for financially motivated reasons. He said it was “rare for a developer to upload a legitimate app, wait almost a year, and then update it with malicious code.”

It’s not uncommon for bad apps to slip into the app stores, nor is it the first time AhMyth has crept its way into Google Play. Both Google and Apple screen apps for malware before listing them for download, and sometimes act proactively to pull apps when they might put users at risk. Last year, Google said it prevented more than 1.4 million privacy-violating apps from reaching Google Play.

A popular Android app began secretly spying on its users months after it was approved on Google Play by Zack Whittaker originally published on TechCrunch



Max Q: Galactic

Hello and welcome back to Max Q! Happy Memorial Day everyone.

In this issue:

  • Astranis’ novel approach to GEO satellites
  • Virgin Galactic’s return to the skies
  • News from SpaceX, and more

Astranis’ novel approach to internet satellites is starting to pay off

Astranis, a satellite internet startup based in San Francisco, said Wednesday that its first spacecraft completed a milestone test and will start bringing broadband access to rural Alaskans as soon as mid-June.

It’s a major step for the company, which was founded in 2015 by John Gedmark and Ryan McLinko. By taking a first principles approach to satellite development, the pair bet that they could make a smaller, cheaper spacecraft for geosynchronous orbit — the orbit farthest from Earth and arguably the most inhospitable — and use them to bring internet to millions, or even billions, of people around the globe.

Their bet is paying off: The company’s first satellite, Arcturus, launched on a Falcon Heavy at the end of April. Less than two minutes after separating from the rocket’s upper stage, the spacecraft started sending telemetry and tracking data to Astranis engineers. From there, the satellite connected to an internet gateway in Utah and communicated with multiple user terminals in Alaska for the first time.


Image Credits: Astranis

Following successful mission, Virgin Galactic targeting June for first commercial spaceflight

Following a successful flight to the edge of space, space tourism company Virgin Galactic says it is ready to enter commercial service in June.

Virgin Galactic’s carrier aircraft, VMS Eve, departed the New Mexico launch site carrying a crew of six (plus two aircraft pilots) at around 9:15 a.m. MT. A little over an hour later, the VSS Unity spaceplane dropped from the wing of the jet at an altitude of around 44,500 feet and rocketed to suborbital space. The entire mission lasted around 90 minutes.

The mission, called Unity 25, concludes a nearly two-year pause in operations for the company. That last flight, which took place in June 2021, also took six people to suborbital space, including company founder billionaire Richard Branson. While Virgin Galactic did not broadcast the Unity 25 mission, the company kept followers updated on social media. NASA Spaceflight, a private news website with massive followings on YouTube and Twitter, unofficially livestreamed the flight.

Image Credits: Virgin Galactic

More news from TC and beyond

  • Fleet Space raised $33 million to grow its space-based mineral prospecting business. (SpaceNews)
  • Gitai, a Tokyo-based startup, wants to use robots as the labor force for the moon and Mars. (TechCrunch)
  • NASA is still working on construction of Mobile Launcher 2 for the next Space Launch System mission (Artemis II), with steel arriving at Kennedy Space Center. (Bechtel)
  • NASA’s Office of the Inspector General found dismaying cost overruns with the Artemis program, and in particular the development of the Space Launch System and the RS-25 rocket engines. (OIG)
  • Satellite Vu, a thermal imaging startup, closed a new tranche of funding ahead of its first launch. (TechCrunch)
  • SpaceX will join the U.S. Federal Aviation Administration as a co-defendant in a lawsuit filed against the regulator over environmental effects of the Starship launch program. (CNBC)
  • South Korea launched a domestically built rocket to space. (Reuters)
  • SkyFi lets anyone order satellite imagery from their smartphone. (TechCrunch)
  • The Spaceport Company demonstrated the potential for off-shore rocket launches in partnership with Evolution Space. (Evolution)
  • TRL11 closed pre-seed funding to further develop video solutions for the space environment. (TRL11)
  • Ursa Major won a contract with the U.S. Air Force to continue the development of two massive engines, one for space launch and one for hypersonics. (DefenseNews)
  • Virgin Orbit’s launch business was sold for parts to Rocket Lab, Stratolaunch and Vast. (TechCrunch)


Max Q is brought to you by me, Aria Alamalhodaei. If you enjoy reading Max Q, consider forwarding it to a friend. 

Max Q: Galactic by Aria Alamalhodaei originally published on TechCrunch


