#iPhone app development company in California
bitcot-technologies · 2 years
iPhone app development company in San Diego, California.
Have an idea for an iPhone app? Have you always wanted to build an iPhone app that fits your business needs? You're in luck: we will help you develop an awesome, high-quality iPhone app on time. BitCot is one of the best iPhone app development companies in San Diego, California, USA. Our mission is to support you through the entire iOS development process, helping you customize your app for maximum usability, scalability, performance, and ultimately profitability. We will help you build your iPhone app from scratch using our proven methods and development team. Our team of expert developers and designers is dedicated to delivering the best possible iOS app to fit your business needs.
data-titan · 9 months
Data Titan is a premier mobile app development company in Glendale, Arizona. We offer mobile application development services for iPhone and Android in Phoenix, Arizona, as well as across California and Nevada.
perfectiongeeks · 1 year
Ethereum App Development Company | Perfectiongeeks
PerfectionGeeks has a team of highly qualified and skilled Ethereum developers with years of experience building business solutions on the Ethereum blockchain. We will help you create an Ethereum dApp that serves your customers for your specific purpose. These applications are developed exclusively on the blockchain network. Using trending tools and the features of this emerging technology, PerfectionGeeks helps clients develop robust Ethereum blockchain applications that boost their business processes.
Visit us:
mariacallous · 3 months
Fortnite maker Epic Games publicly lashed out at Apple on Friday, after its latest proposal for a rival iOS App Store was rejected by the smartphone maker. The company said on X that this rejection was triggered after Apple argued the design of Epic’s app store too closely resembled its own.
This decision follows Epic’s attempt to submit an iOS version of the Epic Games Store last week, a move that would make it possible for iPhone and iPad users to download games onto their device without visiting Apple’s App Store.
“Apple's rejection is arbitrary, obstructive, and in violation of the DMA [Digital Markets Act],” Epic said on Friday in a statement released on X, adding that the company had already shared its concerns with the European Commission. Apple has rejected Epic’s Game Store notarization submission—a process where apps are submitted to the company for review—twice in the past week, Epic spokesperson Elka Looks told WIRED.
The case is part of a wider battle over who gets to control the apps available to hundreds of millions of people. In a blow to the US giant, Apple has been compelled by the Digital Markets Act, the EU's new competition rules, to allow alternatives to its own app store on European iPhones and iPads since March.
“Apple has rejected our Epic Games Store notarization submission twice now, claiming the design and position of Epic’s ‘Install’ button is too similar to Apple’s ‘Get’ button and that our ‘In-app purchases’ label is too similar to the App Store’s ‘In-App Purchases’ label,” the company said.
Epic explained its naming conventions echoed Apple’s because it was “trying to build a store that mobile users can easily understand.” Apple did not reply to WIRED’s request for comment.
There are more than 100 million people who use Apple’s App Store in the EU. The launch of the Epic Games Store would, for the first time, give those users a choice of where they want to download apps.
That moment is eagerly awaited by lawmakers who argue that the tech giants are repressing competition by blocking rivals’ access to their users. “The launch of an alternative app store within the Apple system would create a huge proof, that the DMA can stimulate competition and thereby bring down prices for consumers,” Andreas Schwab, a member of the European Parliament who helped negotiate the DMA, told WIRED.
Epic and Apple are longtime rivals. In 2020, Epic filed a lawsuit against Apple in California, arguing the company’s grip over the iOS market was “unreasonable and unlawful.” Apple came out of the US case (mostly) victorious. But in Europe, Epic has become part of a vocal community of developers who are seething about the power they perceive Apple’s App Store to wield over their businesses and the commission the company charges on in-app purchases.
“Apple holds app providers ransom like the Mafia,” Matthias Pfau, CEO and cofounder of Tuta, an encrypted email provider, told WIRED earlier this year. Epic’s alternative app store proposal is a test case for the possibility of other alternative app stores that could reshape the relationship between Apple and developers.
The Epic Games Store is already available on PC, Mac, and Android, but not on Google Play. Now, the company plans to continue seeking approval for its iOS version, it said: “Barring further roadblocks from Apple, we remain ready to launch in the Epic Games Store … on iOS in the EU in the next couple of months.”
lizdana06 · 8 months
Apple Inc.
Apple Inc. was founded by Steve Jobs, Steve Wozniak, and Ronald Wayne on April 1, 1976, in Cupertino, California. The company's initial goal was to develop and sell personal computers. Over the years, Apple has evolved into one of the most influential and successful technology companies globally. With a commitment to innovation and design, Apple has expanded its product range to include iconic devices such as the iPhone, iPad, Macintosh computers, and Apple Watch. The company's journey has seen it revolutionize the consumer electronics and software industries, setting new standards for user experience and technological integration.
Important Achievements:
• Apple has revolutionized its product lineup by transitioning from Intel processors to its custom-designed Apple Silicon chips. This strategic move, initiated in 2020, has allowed Apple to exert greater control over the integration of hardware and software, resulting in impressive performance improvements and energy efficiency across Mac devices.
• Implementation of App Tracking Transparency: In 2021, Apple implemented App Tracking Transparency in iOS 14.5, a significant achievement that reflects the company's unwavering commitment to user privacy. This feature empowers users to control and manage which apps can track their activities across various platforms, setting a new standard for digital privacy within the technology industry.
Recent Achievements:
• Apple has been advancing its Apple Silicon technology, with ongoing efforts in designing and optimizing custom processors. The company has been progressively enhancing performance, power efficiency, and overall user experience through iterative updates, reinforcing its commitment to pushing the boundaries of hardware innovation.
• Iterating iOS with Privacy Enhancements: In recent years, Apple has been consistently iterating its iOS operating system with a focus on privacy enhancements. The company has been actively refining and introducing features that give users more control over their personal data, demonstrating an ongoing commitment to fostering a secure and private digital environment.
López moreno Yareni Dessiré.
Téllez Vazquez Claudia Tonantzin.
Torres Rodríguez Lizeth Danae.
douxlen · 2 months
Mark Zuckerberg Just Intensified the Battle for AI’s Future
The tech industry is currently embroiled in a heated debate over the future of AI: should powerful systems be open-source and freely accessible, or closed and tightly monitored for dangers?
On Tuesday, Meta CEO Mark Zuckerberg fired a salvo into this ongoing battle, publishing not just a new series of powerful AI models, but also a manifesto forcefully advocating for the open-source approach. The document, which was widely praised by venture capitalists and tech leaders like Elon Musk and Jack Dorsey, serves as both a philosophical treatise and a rallying cry for proponents of open-source AI development. It arrives as intensifying global efforts to regulate AI have galvanized resistance from open-source advocates, who see some of those potential laws as threats to innovation and accessibility.
At the heart of Meta’s announcement on Tuesday was the release of its latest generation of Llama large language models, the company’s answer to ChatGPT. The biggest of these new models, Meta claims, is the first open-source large language model to reach the so-called “frontier” of AI capabilities.
Meta has taken on a very different strategy with AI compared to its competitors OpenAI, Google DeepMind and Anthropic. Those companies sell access to their AIs through web browsers or interfaces known as APIs, a strategy that allows them to protect their intellectual property, monitor the use of their models, and bar bad actors from using them. By contrast, Meta has chosen to open-source the “weights,” or the underlying neural networks, of its Llama models—meaning they can be freely downloaded by anybody and run on their own machines. That strategy has put Meta’s competitors under financial pressure, and has won it many fans in the software world. But Meta’s strategy has also been criticized by many in the field of AI safety, who warn that open-sourcing powerful AI models has already led to societal harms like deepfakes, and could in future open a Pandora’s box of worse dangers.
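The distinction between the two strategies can be sketched with a toy example (everything here is invented for illustration; real model weights are billions of parameters, not two, and real closed providers serve inference behind an API rather than a local function):

```python
# Toy contrast between a closed, API-gated model and an open-weights release.
# With open weights, the parameters themselves are published as plain data
# that anyone can download, run locally, and modify.

open_weights = {"w": 2.0, "b": 1.0}  # stand-in for a published weights file

def run_locally(weights, x):
    """Anyone holding the weights can run inference on their own machine."""
    return weights["w"] * x + weights["b"]

def closed_api(x):
    """A closed provider runs the same math server-side; weights never leave."""
    hidden_weights = {"w": 2.0, "b": 1.0}  # never published
    return hidden_weights["w"] * x + hidden_weights["b"]

# Open weights also allow users to fine-tune the model themselves —
# something impossible when only an API endpoint is exposed.
tuned = dict(open_weights, b=5.0)

print(run_locally(open_weights, 3))  # 7.0
print(closed_api(3))                 # 7.0 — same answer, but no weight access
print(run_locally(tuned, 3))         # 11.0 — only possible with open weights
```

The trade-off the article describes follows directly: local copies cannot be monitored, revoked, or gated against bad actors the way an API can.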
In his manifesto, Zuckerberg argues most of those concerns are unfounded and frames Meta’s strategy as a democratizing force in AI development. “Open-source will ensure that more people around the world have access to the benefits and opportunities of AI, that power isn’t concentrated in the hands of a small number of companies, and that the technology can be deployed more evenly and safely across society,” he writes. “It will make the world more prosperous and safer.” 
But while Zuckerberg’s letter presents Meta as on the side of progress, it is also a deft political move. Recent polling suggests that the American public would welcome laws that restrict the development of potentially-dangerous AI, even if it means hampering some innovation. And several pieces of AI legislation around the world, including the SB1047 bill in California, and the ENFORCE Act in Washington, D.C., would place limits on the kinds of systems that companies like Meta can open-source, due to safety concerns. Many of the venture capitalists and tech CEOs who celebrated Zuckerberg’s letter after its publication have in recent weeks mounted a growing campaign to shape public opinion against regulations that would constrain open-source AI releases. “This letter is part of a broader trend of some Silicon Valley CEOs and venture capitalists refusing to take responsibility for damages their AI technology may cause,” says Andrea Miotti, the executive director of AI safety group Control AI. “Including catastrophic outcomes.”
The philosophical underpinnings for Zuckerberg’s commitment to open-source, he writes, stem from his company’s long struggle against Apple, which via its iPhone operating system constrains what Meta can build, and which via its App Store takes a cut of Meta’s revenue. He argues that building an open ecosystem—in which Meta’s models become the industry standard due to their customizability and lack of constraints—will benefit both Meta and those who rely on its models, harming only rent-seeking companies who aim to lock in users. (Critics point out, however, that the Llama models, while more accessible than their competitors, still come with usage restrictions that fall short of true open-source principles.) Zuckerberg also argues that closed AI providers have a business model that relies on selling access to their systems—and suggests that their concerns about the dangers of open-source, including lobbying governments against it, may stem from this conflict of interest.
Addressing worries about safety, Zuckerberg writes that open-source AI will be better at addressing “unintentional” types of harm than the closed alternative, due to the nature of transparent systems being more open to scrutiny and improvement. “Historically, open-source software has been more secure for this reason,” he writes. As for intentional harm, like misuse by bad actors, Zuckerberg argues that “large-scale actors” with high compute resources, like companies and governments, will be able to use their own AI to police “less sophisticated actors” misusing open-source systems. “As long as everyone has access to similar generations of models—which open-source promotes—then governments and institutions with more compute resources will be able to check bad actors with less compute,” he writes.
But “not all ‘large actors’ are benevolent,” says Hamza Tariq Chaudhry, a U.S. policy specialist at the Future of Life Institute, a nonprofit focused on AI risk. “The most authoritarian states will likely repurpose models like Llama to perpetuate their power and commit injustices.” Chaudhry, who is originally from Pakistan, adds: “Coming from the Global South, I am acutely aware that AI-powered cyberattacks, disinformation campaigns and other harms pose a much greater danger to countries with nascent institutions and severe resource constraints, far away from Silicon Valley.”
Zuckerberg’s argument also doesn’t address a central worry held by many people concerned with AI safety: the risk that AI could create an “offense-defense asymmetry,” or in other words strengthen attackers while doing little to strengthen defenders. “Zuckerberg’s statements showcase a concerning disregard for basic security in Meta’s approach to AI,” says Miotti, the director of Control AI. “When dealing with catastrophic dangers, it’s a simple fact that offense needs only to get lucky once, but defense needs to get lucky every time. A virus can spread and kill in days, while deploying a treatment can take years.”
Later in his letter, Zuckerberg addresses other worries that open-source AI will allow China to gain access to the most powerful AI models, potentially harming U.S. national security interests. He says he believes that closing off models “will not work and will only disadvantage the U.S. and its allies.” China is good at espionage, he argues, adding that “most tech companies are far from” the level of security that would prevent China from being able to steal advanced AI model weights. “It seems most likely that a world of only closed models results in a small number of big companies plus our geopolitical adversaries having access to leading models, while startups, universities, and small businesses miss out on opportunities,” he writes. “Plus, constraining American innovation to closed development increases the chance that we don’t lead at all.”
Miotti is unimpressed by the argument. “Zuckerberg admits that advanced AI technology is easily stolen by hostile actors,” he says, “but his solution is to just give it to them for free.”
beardedmrbean · 6 months
London — European Union regulators opened investigations into Apple, Google and Meta on Monday, the first cases under a sweeping new law designed to stop Big Tech companies from cornering digital markets. The European Commission, the 27-nation bloc's executive arm, said it was investigating the companies for "non-compliance" with the Digital Markets Act.
The Digital Markets Act that took effect earlier this month is a broad rulebook that targets Big Tech "gatekeeper" companies providing "core platform services." Those companies must comply with a set of do's and don'ts, under threat of hefty financial penalties or even breaking up businesses. The rules have the broad but vague goal of making digital markets "fairer" and "more contestable" by breaking up closed tech ecosystems that lock consumers into a single company's products or services.
The commission said in a press release that it "suspects that the measures put in place by these gatekeepers fall short of effective compliance of their obligations under the DMA."
It's looking into whether Google and Apple are fully complying with the DMA's rules requiring tech companies to allow app developers to direct users to offers available outside their app stores. The commission said it's concerned the two companies are imposing "various restrictions and limitations" including charging fees that prevent apps from freely promoting offers.
Google is also facing scrutiny for not complying with DMA provisions that prevent tech giants from giving preference to their own services over rivals. The commission said it is concerned Google's measures will result in third-party services listed on Google's search results page not being treated "in a fair and non-discriminatory manner."
Google said that it has made "significant changes" to the way its services operate in Europe to comply with the DMA.
"We will continue to defend our approach in the coming months," Google's director of competition, Oliver Bethell, said.
In December, it was revealed that Google had agreed to pay $700 million and make several other concessions to settle allegations brought in the U.S. that it had been stifling competition against its Android app store.
The European Commission has slapped Google with antitrust penalties several times already, including a record $5 billion fine levied in 2018 over the search engine's abuse of the market dominance of its Android mobile phone operating system.
The commission is also investigating whether Apple is doing enough to allow iPhone users to easily change web browsers.
Apple said it's confident that its plan complies with the DMA, and it will "continue to constructively engage with the European Commission as they conduct their investigations." The company said it has created a wide range of new developer capabilities, features, and tools to comply with the regulation.
The California company is facing a broad antitrust lawsuit in the U.S., meanwhile, where the Justice Department has alleged that Apple illegally engaged in anti-competitive behavior in an effort to build a "moat around its smartphone monopoly" and maximize its profits at the expense of consumers. Fifteen states and the District of Columbia have joined the suit as plaintiffs.
Apple has also previously fallen foul of the EU's regulators. In its first antitrust penalty against the company, imposed in early March, the European Commission fined Apple almost $2 billion for breaking its competition laws by unfairly favoring its own music streaming service over competitors'.
Meta, also no stranger to the wrath of European regulators, is being investigated by the commission over the option given to users to pay a monthly fee for ad-free versions of Facebook or Instagram, so they can avoid having their personal data used to target them with online ads.
"The Commission is concerned that the binary choice imposed by Meta's 'pay or consent' model may not provide a real alternative in case users do not consent, thereby not achieving the objective of preventing the accumulation of personal data by gatekeepers," it said.
Meta said in a prepared statement that, "Subscriptions as an alternative to advertising are a well-established business model across many industries, and we designed Subscription for No Ads to address several overlapping regulatory obligations, including the DMA. We will continue to engage constructively with the Commission."
The EU fined Meta $1.3 billion about one year ago and ordered it to stop transferring European users' personal information across the Atlantic by October, in the latest salvo in a decadelong case sparked by U.S. cybersnooping fears. Meta called that decision by the commission flawed, and vowed to fight the fine.
The commission said it aims to wrap up its latest investigations into the American tech behemoths within 12 months.
clavaxtechno · 11 months
Clavax is a leading iOS app development services company, offering top-notch solutions for businesses of all sizes. With a dedicated team of skilled developers, Clavax creates innovative and user-friendly iOS applications that drive growth and engagement. Trust Clavax to deliver exceptional iOS app development services tailored to your specific business needs.
recentlyheardcom · 1 year
By Juby Babu

(Reuters) - Apple on Saturday said it has identified a few issues that can cause new iPhones to run warmer than expected, including a bug in the iOS 17 software that will be fixed in an upcoming update.

After complaints that the new phones are getting very warm, Apple said that a device may feel warmer in the first few days "after setting up or restoring the device because of increased background activity."

"Another issue involves some recent updates to third-party apps that are causing them to overload the system," Apple said, adding that it is working with app developers on fixes that are in the process of being rolled out.

The third-party apps causing the issue include the game Asphalt 9, Meta's Instagram, and Uber, according to the company. Instagram fixed the issue with its app on Sept. 27.

The upcoming iOS 17 bug fix will not reduce performance to address the iPhone's temperature.

The Cupertino, California-headquartered company said the iPhone 15 Pro and Pro Max do not overheat because of their design; rather, the new titanium shells improve heat dissipation compared with prior stainless steel models.

Apple also said the issue is not a safety or injury risk and will not impact the phone's long-term performance.

(Reporting by Juby Babu in Bengaluru; Editing by Nick Zieminski)
webnewsify1 · 1 year
WWDC 2023: A Look at the New Features for Developers
Tumblr media
Apple's upcoming Worldwide Developers Conference is anticipated to be one of its most significant events yet, as the company may finally unveil its mixed reality headset after years of speculation and leaked information. This move would not only introduce Apple into a new product category but also showcase its efforts to demonstrate the value of investing in virtual reality.  Additionally, the conference is expected to include operating system updates, new apps and features, and potentially new hardware. We have compiled information on the main WWDC keynote and some of the anticipated Apple announcements, including when and how to watch them. Apple has announced that the primary WWDC keynote for this year will occur on Monday, June 5th, at 1PM ET / 10AM PT. The event will take place both digitally and in-person at Apple Park in Cupertino, California. Tim Cook, Apple CEO, is expected to begin the proceedings. Where can I watch WWDC? The WWDC keynote will be available for live streaming on Apple's website and YouTube channel. Once it goes live, the stream will also be embedded at the top of this article. If you are unable to watch it live, Apple will post a pre-recorded version on YouTube for later viewing. Now, let's delve into the major announcements that we anticipate Apple will make at WWDC. What’s next for Apple? Apple has many new Mac models in the works. Although it's uncertain if all of these will be revealed at the WWDC, it remains a possibility. Apart from the aforementioned MacBooks, several new products, including a Mac Pro with an in-house Apple chip, an updated 24-inch iMac, and two Mac Studio models, are being constructed.  Furthermore, we're keeping a close eye on Apple's AI developments. While there hasn't been much buzz regarding their AI pursuits, the company's job postings show a keen interest in hiring AI experts. 
In addition, ChatGPT has recently been banned for Apple staff due to data privacy worries, indicating that the company might be creating its own AI tool for workers, similar to Samsung.

Of course, we'll also be anticipating the launch of the iPhone 15 later this year. According to rumors, all models of the iPhone 15 will feature the Dynamic Island (not just the Pro), as well as a USB-C port, a result of another EU regulation. However, we'll have to wait until September for that.

Developers can look forward to updates to iOS, iPadOS, macOS, and more at WWDC. Rumors suggest that iOS 17 will bring various quality-of-life updates, including a Personal Voice tool and a new iPhone lockscreen feature. iPadOS 17 may address technical issues with Stage Manager. While little is known about macOS 14 and tvOS 17, watchOS 10 is expected to receive a significant update with a new widget-heavy interface.
0 notes
dxminds05 · 1 year
Text
App development companies in USA
Here are the top app development companies in the USA. Follow the link to learn more:
0 notes
mariacallous · 7 months
Text
Europe changed the rules of the internet this week when the Digital Markets Act took effect, holding the biggest tech companies to tough new standards. Now the world is waiting to see which giant will be first to fall foul of the law. One of the architects of the DMA says Apple is a strong candidate for the first formal investigation, describing the company as “low hanging fruit.”
Apple has faced intensifying pressure in recent years from competitors, regulators, and courts in both Europe and the US over the restrictions it places on app-makers, who must rely on its App Store to reach millions of users. Yesterday Apple terminated the developer account of Fortnite publisher Epic Games, which has challenged the company in US courts and recently announced its intention to launch a rival to the Apple App Store.
German MEP Andreas Schwab, who led the negotiations that finalized the DMA on behalf of the EU Parliament, says that makes Apple a likely first target for non-compliance. “[This] gives me a very clear expectation that they want to be the first,” he tells WIRED. “Apple’s approach is a bit weird on all this and therefore it's low hanging fruit.”
Schwab is not involved in enforcement of the DMA. That’s overseen by the European Commission, which has already demanded “further explanation” as to why Apple terminated Epic’s account and is evaluating whether this violates the DMA.
“Apple’s approach to the Digital Markets Act was guided by two simple goals: complying with the law and reducing the inevitable, increased risks the DMA creates for our EU users,” says the company in a statement sent to WIRED by Apple spokesperson Rob Saunders. Apple has said on its website that alternative app stores carry the risk of malware, illicit code and other harmful content.
The DMA’s rules that aim to “break open” tech platforms require Apple to allow iPhone users to download apps from places other than Apple’s official App Store. The Epic Games Store, announced in January by Fortnite maker Epic, would have been the first alternative app store to take advantage of the new system.
Apple tells WIRED it had the right to terminate Epic’s accounts according to a 2021 California court ruling. Epic CEO Tim Sweeney has been a vocal critic of what he styles as Apple’s “app store monopoly” for years, although in January the US Supreme Court denied a request to hear the latest episode in a lengthy antitrust dispute between the two companies, in a victory for the smartphone maker.
The DMA went into force at midnight on March 7 in Brussels—3 pm in Silicon Valley. From that moment, six of the world’s biggest tech companies—Apple, Alphabet, Meta, Amazon, Microsoft, and TikTok’s Beijing-based owner ByteDance—must comply with a suite of new rules designed to improve competition in digital markets.
In addition to Apple having to allow outside apps, Microsoft Windows will no longer have Microsoft-owned Bing as its default search tool; users of Meta’s WhatsApp will be able to communicate with people on rival messaging apps; and Google and Amazon will have to tweak their search results to create more room for rivals. Companies that don’t comply with the new rules can be fined up to 20 percent of their global turnover.
The new rules should cause the European internet to “change for the better,” says Schwab, a center-right MEP. “To allow more openness, more fairness, and most of all more innovation and therefore new services—that’s the idea.”
Schwab’s comments add to a recent chorus of criticism targeting Apple. The EU’s antitrust chief, Margrethe Vestager, told Bloomberg earlier this week that the DMA will initially focus on sorting out big tech’s app stores. “I think it’s important you can have more than one app store on your phone,” she said on Tuesday.
Following Apple’s removal of Epic, the tone in the hallways of the Commission had become more urgent. “Under the #DMA, there is no room for threats by gatekeepers to silence developers,” said Thierry Breton, the EU’s industry chief, on X on Thursday, apparently referring to allegations by Epic’s Sweeney that Apple had blocked the company’s account because of the CEO’s critical tweets. “I have asked our services to look into Apple’s termination of Epic’s developer account as a matter of priority.”
Flexing the DMA’s powers on Apple’s App Store would advertise how the law can improve life online for the general public, Schwab says. “I think the App Store would be a good example to show what we want to achieve with the DMA,” he says. “They will just see more apps and they will like these apps.”
Giving people choice over where they get mobile apps by requiring Apple and Google to permit alternative app stores on devices is seen as a key pillar of the DMA. In addition to giving users more choice, app developers will also gain more opportunities to innovate, increasing competition, says Schwab. “With alternative app stores we can make markets a bit broader.”
4 notes · View notes
insafmedia · 1 year
Link
0 notes
douxlen · 2 months
Text
Mark Zuckerberg Just Intensified the Battle for AI’s Future
The tech industry is currently embroiled in a heated debate over the future of AI: should powerful systems be open-source and freely accessible, or closed and tightly monitored for dangers?
On Tuesday, Meta CEO Mark Zuckerberg fired a salvo into this ongoing battle, publishing not just a new series of powerful AI models, but also a manifesto forcefully advocating for the open-source approach. The document, which was widely praised by venture capitalists and tech leaders like Elon Musk and Jack Dorsey, serves as both a philosophical treatise and a rallying cry for proponents of open-source AI development. It arrives as intensifying global efforts to regulate AI have galvanized resistance from open-source advocates, who see some of those potential laws as threats to innovation and accessibility.
At the heart of Meta’s announcement on Tuesday was the release of its latest generation of Llama large language models, the company’s answer to ChatGPT. The biggest of these new models, Meta claims, is the first open-source large language model to reach the so-called “frontier” of AI capabilities.
Meta has adopted a very different AI strategy from its competitors OpenAI, Google DeepMind, and Anthropic. Those companies sell access to their AIs through web browsers or interfaces known as APIs, a strategy that allows them to protect their intellectual property, monitor the use of their models, and bar bad actors from using them. By contrast, Meta has chosen to open-source the “weights,” or the underlying neural networks, of its Llama models—meaning they can be freely downloaded by anybody and run on their own machines. That strategy has put Meta’s competitors under financial pressure, and has won it many fans in the software world. But Meta’s strategy has also been criticized by many in the field of AI safety, who warn that open-sourcing powerful AI models has already led to societal harms like deepfakes, and could in future open a Pandora’s box of worse dangers.
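As a loose illustration of the distinction drawn above—weights published as files anyone can download and run locally, versus access sold only through an API—here is a minimal, hypothetical Python sketch. The file name, parameter values, and the `closed_api` function are all invented for illustration; this is not Meta's or any provider's real code.

```python
# Sketch of "open weights" vs. "closed API" distribution models.
# All names, shapes, and numbers below are invented for illustration.
import os
import pickle
import tempfile

# "Open weights": the provider publishes the raw parameters as a file.
weights = {"embedding": [[0.1, 0.2], [0.3, 0.4]], "output_bias": [0.0, 0.5]}
path = os.path.join(tempfile.mkdtemp(), "llama-sketch.bin")
with open(path, "wb") as f:
    pickle.dump(weights, f)

# Anyone who downloads the file can inspect every parameter and run the
# model on their own machine.
with open(path, "rb") as f:
    local_copy = pickle.load(f)
assert local_copy == weights  # full local access to all parameters

# "Closed" access: the weights stay on the provider's servers, and users
# only ever see model outputs returned by an API.
def closed_api(prompt: str) -> str:
    # Parameters remain private; callers receive only this response.
    return f"response to: {prompt}"

print(sorted(local_copy))   # open release: parameter names are visible
print(closed_api("hello"))  # closed release: only outputs are visible
```

The sketch only illustrates the access model, not the scale: real open-weight releases ship billions of parameters in specialized tensor formats rather than pickled Python lists.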
In his manifesto, Zuckerberg argues most of those concerns are unfounded and frames Meta’s strategy as a democratizing force in AI development. “Open-source will ensure that more people around the world have access to the benefits and opportunities of AI, that power isn’t concentrated in the hands of a small number of companies, and that the technology can be deployed more evenly and safely across society,” he writes. “It will make the world more prosperous and safer.” 
But while Zuckerberg’s letter presents Meta as on the side of progress, it is also a deft political move. Recent polling suggests that the American public would welcome laws that restrict the development of potentially-dangerous AI, even if it means hampering some innovation. And several pieces of AI legislation around the world, including the SB1047 bill in California, and the ENFORCE Act in Washington, D.C., would place limits on the kinds of systems that companies like Meta can open-source, due to safety concerns. Many of the venture capitalists and tech CEOs who celebrated Zuckerberg’s letter after its publication have in recent weeks mounted a growing campaign to shape public opinion against regulations that would constrain open-source AI releases. “This letter is part of a broader trend of some Silicon Valley CEOs and venture capitalists refusing to take responsibility for damages their AI technology may cause,” says Andrea Miotti, the executive director of AI safety group Control AI. “Including catastrophic outcomes.”
The philosophical underpinnings for Zuckerberg’s commitment to open-source, he writes, stem from his company’s long struggle against Apple, which via its iPhone operating system constrains what Meta can build, and which via its App Store takes a cut of Meta’s revenue. He argues that building an open ecosystem—in which Meta’s models become the industry standard due to their customizability and lack of constraints—will benefit both Meta and those who rely on its models, harming only rent-seeking companies who aim to lock in users. (Critics point out, however, that the Llama models, while more accessible than their competitors, still come with usage restrictions that fall short of true open-source principles.) Zuckerberg also argues that closed AI providers have a business model that relies on selling access to their systems—and suggests that their concerns about the dangers of open-source, including lobbying governments against it, may stem from this conflict of interest.
Addressing worries about safety, Zuckerberg writes that open-source AI will be better at addressing “unintentional” types of harm than the closed alternative, due to the nature of transparent systems being more open to scrutiny and improvement. “Historically, open-source software has been more secure for this reason,” he writes. As for intentional harm, like misuse by bad actors, Zuckerberg argues that “large-scale actors” with high compute resources, like companies and governments, will be able to use their own AI to police “less sophisticated actors” misusing open-source systems. “As long as everyone has access to similar generations of models—which open-source promotes—then governments and institutions with more compute resources will be able to check bad actors with less compute,” he writes.
But “not all ‘large actors’ are benevolent,” says Hamza Tariq Chaudhry, a U.S. policy specialist at the Future of Life Institute, a nonprofit focused on AI risk. “The most authoritarian states will likely repurpose models like Llama to perpetuate their power and commit injustices.” Chaudhry, who is originally from Pakistan, adds: “Coming from the Global South, I am acutely aware that AI-powered cyberattacks, disinformation campaigns and other harms pose a much greater danger to countries with nascent institutions and severe resource constraints, far away from Silicon Valley.”
Zuckerberg’s argument also doesn’t address a central worry held by many people concerned with AI safety: the risk that AI could create an “offense-defense asymmetry,” or in other words strengthen attackers while doing little to strengthen defenders. “Zuckerberg’s statements showcase a concerning disregard for basic security in Meta’s approach to AI,” says Miotti, the director of Control AI. “When dealing with catastrophic dangers, it’s a simple fact that offense needs only to get lucky once, but defense needs to get lucky every time. A virus can spread and kill in days, while deploying a treatment can take years.”
Later in his letter, Zuckerberg addresses other worries that open-source AI will allow China to gain access to the most powerful AI models, potentially harming U.S. national security interests. He says he believes that closing off models “will not work and will only disadvantage the U.S. and its allies.” China is good at espionage, he argues, adding that “most tech companies are far from” the level of security that would prevent China from being able to steal advanced AI model weights. “It seems most likely that a world of only closed models results in a small number of big companies plus our geopolitical adversaries having access to leading models, while startups, universities, and small businesses miss out on opportunities,” he writes. “Plus, constraining American innovation to closed development increases the chance that we don’t lead at all.”
Miotti is unimpressed by the argument. “Zuckerberg admits that advanced AI technology is easily stolen by hostile actors,” he says, “but his solution is to just give it to them for free.”
0 notes
teqtopagency · 1 year
Text
Custom Mobile App Development Company in California
TEQTOP is one of the top Custom Mobile App Development Companies in California. Our team of skilled and experienced iOS app developers specializes in creating custom, high-quality apps for businesses of all sizes, handling UX and UI design, implementation, QA, and integration for iPhone and iPad apps. We have adopted the latest technology and industry best practices to ensure app quality, functionality, visual appeal, and user-friendliness. Contact TEQTOP today to bring your iOS app idea to life.
0 notes