OpenAI Codex: A Pair Programmer to Shape the Future Coding Paradigm
OpenAI, the company behind the popular chatbot ChatGPT, launched its next-generation coding agent, Codex, on May 16, 2025. Designed to streamline several fundamental aspects of software programming, Codex is set to accelerate the rapidly growing field of automated coding. Let’s take a deeper look at what Codex is and what it can do.
What is Codex?
OpenAI Codex is an AI model that translates natural language into computer code. Built on OpenAI’s o3 reasoning model and trained on vast codebases, it powers tools like GitHub Copilot, enabling users to write, understand, and automate coding tasks across multiple programming languages using plain-English instructions. Codex can access GitHub repositories, drawing on the billions of lines of code already stored there to work out solutions to complex tasks.
Development History of OpenAI Codex
Codex dates back to August 2021, building on GPT-3, OpenAI’s large language model trained on text. To specialise GPT-3 for programming, developers fine-tuned a 12-billion-parameter version of the model on code and named the result Codex. This initial version of Codex could write SQL queries, convert conversations into Python code, and build UI components.
Later, between 2022 and 2023, Codex continued evolving in smaller versions and was integrated into GPT-4 and ChatGPT Pro. Recently, OpenAI has given Codex its own environment and advanced coding functionality. In doing so, the company has also shifted Codex’s underlying reasoning model from GPT-3 to the new codex-1.
What Can Codex Do?
The new Codex model is adept at executing multiple coding-related tasks. Here is a quick overview:
Conversational Coding
Perhaps one of the most groundbreaking features of Codex is the ability to translate plain English into code. This allows users to instruct the AI with phrases and receive accurate, ready-to-run scripts across a range of programming languages.
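As an illustration of what this looks like in practice (a hypothetical prompt and output, not an actual Codex transcript), an instruction such as “give me the three largest numbers in a list, biggest first” might produce a ready-to-run snippet like this:

```python
# Hypothetical example of code a plain-English prompt such as
# "give me the three largest numbers in a list, biggest first"
# might yield from a coding assistant.
import heapq


def three_largest(numbers):
    """Return the three largest values in descending order."""
    return heapq.nlargest(3, numbers)


print(three_largest([4, 17, 9, 2, 25, 11]))  # [25, 17, 11]
```

The point is that the user supplies intent in everyday language and receives idiomatic, executable code in return.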
Developing Software
OpenAI’s new code models are transforming the software landscape by allowing developers to generate functional code in seconds. With simple prompts, users can create entire programs without manually writing each line.
Analysing Data
From spreadsheets to large CSV files, OpenAI’s code tools can now analyse, clean, and visualise data with minimal user input. Journalists, researchers, and business analysts can quickly generate charts or summaries by simply uploading a file and asking questions.
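A minimal sketch of the kind of script such a request might generate (the file contents and column names here are invented for illustration), summarising a sales column from an uploaded CSV:

```python
# Hypothetical sketch: the sort of script an AI tool might produce
# from the request "summarise the sales column of this CSV".
import csv
import io
import statistics

# Stand-in for an uploaded file.
data = io.StringIO("month,sales\nJan,120\nFeb,95\nMar,143\n")

rows = list(csv.DictReader(data))
sales = [int(row["sales"]) for row in rows]

# Prints: rows=3 total=358 mean=119.3
print(f"rows={len(sales)} total={sum(sales)} mean={statistics.mean(sales):.1f}")
```

A non-programmer gets the same summary by uploading the real file and asking the question in plain English.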
Educational Companion for Coding Learners
The models serve as a real-time tutor, capable of explaining code line by line, debugging errors, and answering conceptual questions. For students and self-learners, this offers an accessible alternative to traditional learning methods.
Accelerating AI-Powered Applications
Startups and enterprises alike are leveraging these tools to make prototypes of AI-driven apps without the need for large development teams, cutting costs and speeding up innovation cycles.
Common Issues and Concerns on the Rise
Accuracy and Reliability
While OpenAI's models can generate functional code with impressive speed, experts caution that the outputs are not always correct or optimal. In complex scenarios, the AI may produce code that looks fine but contains subtle bugs, inefficiencies, or security flaws. Relying on such code without proper review could pose risks, particularly in sensitive applications like healthcare, finance, or cybersecurity.
Security and Misuse Risks
OpenAI’s code generation capabilities could be exploited to create malicious software. Although safeguards are in place, the potential for misuse remains a real concern. Cybersecurity experts warn that making code generation widely available at this scale could empower bad actors.
Impact on Programming Jobs
As these tools improve, there's a growing debate about the impact on employment. While some argue that AI will complement human developers by automating repetitive tasks, others worry it could eventually reduce demand for entry-level programmers, especially in roles focused on routine coding or bug fixing.
Code Licensing Confusion
The use of AI-generated code raises legal questions around ownership and licensing. Developers and companies are seeking clarity on how such content can be safely used in commercial products.
Skill Dilution
Some educators and software veterans fear that easy access to AI-generated code may hinder learning. If new developers rely too heavily on tools like Codex, they may struggle to build a deep understanding of how software works. Over time, this could lead to a generation of coders with limited problem-solving skills or creative confidence.
Conclusion
OpenAI Codex brings a transformative shift in how we code, making programming faster and more open. While it offers immense potential to streamline development and learning, thoughtful use, ethical oversight, and continued human expertise are essential to harnessing its power responsibly in the evolving tech landscape.
Trump signs orders to boost Nuclear Power and speed up project approvals
President Donald Trump signed a series of executive orders on Friday aimed at dramatically increasing the U.S. nuclear energy output—by up to four times over the next 25 years—a target energy experts say is highly unlikely.
The orders shift key decision-making powers from the Nuclear Regulatory Commission (NRC), the longstanding independent safety body, to the Department of Energy, allowing the energy secretary to fast-track approvals of advanced reactor designs and projects.
The move comes amid growing energy demand driven by artificial intelligence and the rapid expansion of data centers, which are putting immense pressure on the nation’s electric grid.
Interior Secretary Doug Burgum said, “What we do over the next five years with electricity will shape the next 50,” emphasizing the urgency of energy expansion to maintain a competitive edge with China in AI development.
Still, the goal of quadrupling nuclear output is seen as unrealistic. The U.S. has no operational next-generation reactors, and only two new reactors have been built in the past five decades—both plagued by years of delays and massive cost overruns.
Currently, the U.S. operates 94 nuclear reactors, providing about 19% of the country’s electricity. Fossil fuels account for around 60%, while renewables contribute about 21%, according to the Energy Information Administration.
Trump Promotes Nuclear at Oval Office Ceremony
During the signing event in the Oval Office, President Trump, flanked by energy industry leaders, called nuclear power “a hot industry” and pledged large-scale development. Burgum and others argued that excessive regulation has stifled nuclear innovation for decades.
“This marks the end of over 50 years of overregulation,” said Burgum, who heads Trump’s new Energy Dominance Council.
The orders propose restructuring the NRC to ensure faster project reviews, including setting an 18-month deadline for decisions. They also launch a pilot program to build three experimental reactors by July 4, 2026, and use the Defense Production Act to secure uranium and other critical fuels.
Additional directives include evaluating the reopening of closed nuclear plants and locating new reactors on federal and military land.
NRC spokesperson Scott Burnell said the agency is reviewing the orders and will comply with White House instructions.
Industry Reacts with Optimism
Jacob DeWitte, CEO of nuclear company Oklo, demonstrated nuclear efficiency to Trump by holding up a golf ball—symbolizing the amount of uranium needed to power a person’s lifetime energy use. “It doesn’t get any better than that,” he said. “Very exciting indeed,” Trump responded.
Although Trump has previously backed fossil fuel expansion, he praised nuclear energy for being safe and clean—though he didn’t mention its climate advantages. Nuclear energy produces no greenhouse gas emissions, but safety advocates note the risks of accidents, attacks, and unresolved nuclear waste storage.
The proposed NRC overhaul includes workforce reductions but stops short of removing the current NRC commissioners. The future of the agency’s current chair, David Wright, remains uncertain as his term ends in June.
Criticism from Safety Advocates and Former Officials
Critics say Trump’s orders could erode critical safety protections. Edwin Lyman of the Union of Concerned Scientists warned that bypassing or weakening the NRC could threaten public safety. “The U.S. nuclear industry will fail if safety isn’t prioritized,” he said.
Former NRC Chair Gregory Jaczko was even more blunt, likening the orders to “a guillotine to the nation’s nuclear safety system,” and accused Trump of making the industry more dangerous and unreliable.
Push for Innovation Amid Global Competition
Nations like Russia, China, and Canada are advancing rapidly in building next-generation reactors. In Canada, the first of four small modular reactors recently began construction. Meanwhile, the U.S. is working to modernize its own approval processes.
Isaiah Taylor, CEO of Valar Atomics, said excessive bureaucracy has slowed U.S. progress while global rivals move faster. He welcomed the new mandate for the Department of Energy to accelerate innovation.
The NRC is currently evaluating applications for small modular reactors, with plans for some to begin operating in the early 2030s. The agency expects its reviews to be completed in under three years.
Tori Shivanandan, COO of California-based Radiant Nuclear, called Trump’s executive orders a “watershed moment” for the U.S. nuclear industry, expressing confidence they will drive the sector’s success.
Microsoft fires employee who interrupted CEO's speech to protest AI tech for Israeli military
Microsoft has fired an employee who interrupted a speech by CEO Satya Nadella to protest the company's work supplying the Israeli military with technology used for the war in Gaza.
Software engineer Joe Lopez could be heard shouting at Nadella in the opening minutes Monday of the tech giant's annual Build developer conference in Seattle before getting escorted out of the room. Lopez later sent a mass email to colleagues disputing the company's claims about how its Azure cloud computing platform is used in Gaza.
Lopez's outburst was the first of several pro-Palestinian disruptions at the event, which drew thousands of software developers to the Seattle Convention Center. At least three talks by executives were disrupted; the company even briefly cut the audio of one livestreamed event. Protesters also gathered outside the venue.
Microsoft has previously fired employees who protested company events over its work in Israel, including at its 50th anniversary party in April.
Microsoft acknowledged last week that it provided AI services to the Israeli military for the war in Gaza but said it had found no evidence to date that its Azure platform and AI technologies were used to target or harm people in Gaza.
The advocacy group No Azure for Apartheid, led by employees and ex-employees, says Lopez received a termination letter after his Monday protest but was unable to open it. The group also says the company has blocked internal emails that mention words including “Palestine” and “Gaza.”
Microsoft hasn't returned emailed requests for comment about its response to this week's protests. The four-day conference ends Thursday.
OpenAI taps Iconic iPhone designer Jony Ive for new AI hardware project in $6.5 billion deal
OpenAI, the company behind ChatGPT, has enlisted famed designer Jony Ive — best known for his role in creating Apple’s iPhone — to lead a major new AI hardware initiative.
As part of this move, OpenAI is acquiring io Products, a company Ive co-founded, in a deal worth approximately $6.5 billion. Ive gained worldwide recognition during his 27 years at Apple, where he played a key role in defining the design of iconic devices like the iPhone, especially in partnership with Steve Jobs. He left Apple in 2019.
Now, Ive is stepping into the forefront of the AI era. While OpenAI hasn’t disclosed exactly what the new product will be, analysts speculate it could involve a physical AI interface — possibly something like AI-integrated glasses, a robot, or even a vehicle — that brings generative AI out of the digital realm and into real-world devices.
OpenAI CEO Sam Altman has reportedly been collaborating with Ive and his design firm LoveFrom since 2023. In a letter posted on OpenAI’s website, Altman and Ive said their shared vision led to the formation of a new company to develop and manufacture a fresh generation of AI products.
That company, io, was officially incorporated in Delaware in late 2023 and registered in California in April 2024. OpenAI already held a 23% stake in io through an earlier partnership and will now acquire full control with a $5 billion equity purchase.
Ive will not join OpenAI as a staff member; instead, LoveFrom will remain independent while taking on major creative and design roles for both OpenAI and io. All three entities — OpenAI, LoveFrom, and io — are based in San Francisco.
Peter Welinder, a longtime OpenAI executive who has led its robotics and early hardware efforts, will head the new io division.
Altman, 40, is clearly aiming to replicate the kind of innovation that resulted from the collaboration between Jobs and Ive. When founding LoveFrom, Ive drew inspiration from Jobs’ belief in crafting products with deep care and humanity. His studio is located in a historically artistic part of San Francisco, once frequented by Beat Generation writers like Jack Kerouac and Allen Ginsberg.
OpenAI, just a couple of miles away, was originally launched as a nonprofit focused on safe AI research. Although it’s now deeply invested in commercial products like ChatGPT, it is still overseen by a nonprofit board. Altman recently announced that OpenAI will retain this governance model while exploring ways to better attract capital and pursue acquisitions — steps meant to keep the company competitive while staying true to its original mission.
It’s unclear whether Altman’s work with Ive began before or after his brief ousting from OpenAI in late 2023, which happened between io’s incorporation in Delaware and its later establishment in California.
Google rolls out 'AI Mode' in major search engine overhaul
Google has launched a new "AI Mode" in the U.S. as part of its ongoing revamp of search, aiming to reshape how users find information online. Announced at its annual developer conference, the new feature allows users to interact with the search engine conversationally, like speaking with an expert.
This move marks a rapid expansion just two and a half months after limited testing via Google's Labs. The company is also integrating its advanced Gemini 2.5 AI model into search algorithms and testing tools like automatic concert ticket purchasing and live video-based search.
Google previewed a new pair of Android XR smart glasses—featuring a built-in AI assistant and camera—signaling a return to the smart glasses market. Designed with Gentle Monster and Warby Parker, the glasses will compete with Meta’s Ray-Ban offering. No release date or price was announced.
The broader AI push builds on Google's "AI overviews," which now appear atop search results and are used by 1.5 billion people monthly. These summaries have shifted user behavior, leading to longer, more complex queries—but also to a 30% decline in clicks to external websites, per BrightEdge.
Despite concerns about misinformation and reduced web traffic, Google is moving forward amid rising competition from AI-powered alternatives like ChatGPT and Perplexity. The company insists this shift strengthens its dominance, with Google still receiving 136 billion monthly visits—far ahead of AI rivals.
Upcoming experiments will include AI-assisted booking through Project Mariner, video-based search, and deep topic exploration. A new $250/month "Ultra" subscription tier will offer premium AI tools and 30TB of storage, expanding beyond the current $20/month "Pro" plan.
Even Google's own AI admits that AI Mode is likely to increase its influence over online information access—while warning publishers of the potential impact on web traffic.
Source: With inputs from agency
Elon Musk, who's suing Microsoft, is also software giant's special guest in new Grok AI partnership
Elon Musk is in a legal fight with Microsoft but made a friendly virtual appearance at the software giant's annual technology showcase to reveal that his Grok artificial intelligence chatbot will now be hosted on Microsoft's data centers.
“It’s fantastic to have you at our developer conference,” Microsoft CEO Satya Nadella said to Musk in a pre-recorded video conversation broadcast Monday at Microsoft's Build conference in Seattle.
Musk last year sued Microsoft and its close business partner OpenAI in a dispute over Musk's foundational contributions to OpenAI, which Musk helped start. Musk now runs his own AI company, xAI, maker of Grok, a competitor to OpenAI's ChatGPT.
OpenAI CEO Sam Altman also spoke with Nadella via live video call earlier at Monday's conference.
Musk's deal means that the latest versions of xAI's Grok models will be hosted on Microsoft's Azure cloud computing platform, alongside competing models from OpenAI and other companies, including Facebook parent Meta Platforms, Europe-based AI startups Mistral and Black Forest Labs and Chinese company DeepSeek.
The Grok partnership comes just days after xAI had to fix the chatbot to stop it from repeatedly bringing up South African racial politics and the subject of “white genocide” in public interactions with users of Musk's social media platform X. The company blamed an employee's “unauthorized modification” for the unsolicited commentary, which mirrored South Africa-born Musk's own focus on the topic.
Musk didn't address last week's controversy in his chat with Nadella but described honesty as the “best policy” for AI safety.
“We have and will make mistakes, but we aspire to correct them very quickly,” Musk said.
Nadella was interrupted by protest over Gaza
Monday's Build conference also became the latest Microsoft event to be interrupted by a protest over the company's work with the Israeli government. Microsoft has previously fired employees who protested company events, including its 50th anniversary party in April.
“Satya, how about you show how Microsoft is killing Palestinians?" a protesting employee shouted in the first minutes of Nadella's introductory talk Monday. "How about you show how Israeli war crimes are powered by Azure?”
Nadella continued his presentation as the protesters were escorted out. Microsoft acknowledged last week that it provided AI services to the Israeli military for the war in Gaza but said it has found no evidence to date that its Azure platform and AI technologies were used to target or harm people in Gaza.
Microsoft didn't immediately return an emailed request for comment about the protest Monday.
Microsoft introduces new AI coding agent
Microsoft-owned GitHub also used the Seattle gathering to introduce a new AI coding “agent” to help programmers build new software.
The company already offers a Copilot coding assistant but the promise of so-called AI agents is that they can do more work on their own on a user's behalf. The updated tool is supposed to work best on tasks of “low-to-medium complexity” in codebases that are already well-tested, handling “boring tasks” while people “focus on the interesting work,” according to Microsoft's announcement.
The new tool arrives just a week after Microsoft began laying off hundreds of its own software engineers in Washington's Puget Sound region as part of global cuts of nearly 3% of its total workforce, amounting to about 6,000 workers.
Starlink begins operations in Bangladesh; lowest package costs Tk 4200
Starlink, a Non-Geostationary Orbit (NGSO) satellite internet provider owned by Elon Musk’s SpaceX, has officially started its operations in Bangladesh.
The announcement was made on Tuesday (20 May) in a Facebook post by Faiz Ahmad Taiyeb, Special Assistant to the Chief Adviser.
“On Monday (19 May) afternoon, they informed me over a phone call and confirmed the matter this morning on their X handle,” he said.
“Initially, Starlink is launching with two packages – Starlink Residence and Residence Lite. The monthly cost is Tk 6,000 for one and Tk 4,200 for the other. A one-time cost of Tk 47,000 will be required for setup equipment,” he added.
Faiz Ahmad also said there will be no speed or data limits. Individuals will be able to use unlimited data at speeds of up to 300 Mbps. Customers in Bangladesh can start placing orders from today.
“With this, Sir’s (CA's) expectation of launching within 90 days has been fulfilled,” he said.
He went on to say, “Although expensive, this creates a sustainable alternative for premium customers to access high-quality and high-speed internet services.”
“In addition, companies will have opportunities to expand their business into areas where fibre or high-speed internet services are yet to reach. NGOs, freelancers, and entrepreneurs will benefit from uninterrupted, high-speed internet throughout the year,” said the Special Assistant to the Chief Adviser.
The Chief Adviser on Tuesday congratulated all involved as Starlink officially started its operations in Bangladesh, says his Deputy Press Secretary Abul Kalam Azad Majumder.
Prof Yunus on February 14 held an extensive video discussion with Elon Musk, the founder of SpaceX, and owner of Tesla and X, to explore future collaboration and to make further progress to introduce Starlink satellite internet service in Bangladesh.
On April 28, Chief Adviser Prof Yunus officially approved the license for Starlink to begin operations in Bangladesh, marking a significant step towards improving connectivity, especially in remote and underserved areas.
On April 7, Starlink applied to the BTRC for a licence to operate in the country under the regulatory framework titled ‘Guidelines for Non-Geostationary Orbit (NGSO) Satellite Services Operators in Bangladesh’.
On March 25, Prof Yunus directed the relevant authorities to ensure the commercial launch of Starlink’s satellite broadband internet service in Bangladesh within 90 days.
In accordance with this guideline, Starlink submitted a formal application along with the applicable fees and required documents.
A decision to issue the license was taken in principle during a meeting of the commission on April 21.
Starlink becomes a new addition to Bangladesh’s internet landscape, marking the country as the second in South Asia—after Sri Lanka—to host services from the global satellite internet provider.
House Republicans include a 10-year ban on US states regulating AI in ‘big, beautiful’ bill
House Republicans caught the tech industry by surprise and sparked backlash from state governments after inserting a provision into their flagship tax bill that would block state and local regulation of artificial intelligence for the next ten years.
This short but impactful addition, included in the House Energy and Commerce Committee’s broad legislative package, represents a significant victory for the AI industry. Tech companies have long pushed for consistent, minimal regulation as they advance technologies they claim will revolutionize society.
Despite its potential scope, the measure faces an uncertain future in the Senate, where procedural constraints may prevent its inclusion in the final version of the GOP bill.
“I’m not sure it’ll survive the Byrd Rule,” said Senator John Cornyn, R-Texas, referring to the requirement that all elements of a budget reconciliation bill must primarily relate to fiscal matters rather than broader policy initiatives.
“That sounds to me like a policy change. I’m not going to speculate what the parliamentarian is going to do but I think it is unlikely to make it,” Cornyn said.
Senators in both parties have expressed an interest in artificial intelligence and believe that Congress should take the lead in regulating the technology. But while lawmakers have introduced scores of bills, including some bipartisan efforts, that would impact artificial intelligence, few have seen any meaningful advancement in the deeply divided Congress.
An exception is a bipartisan bill expected to be signed into law by President Donald Trump next week that would enact stricter penalties on the distribution of intimate “revenge porn” images, both real and AI-generated, without a person’s consent.
“AI doesn’t understand state borders, so it is extraordinarily important for the federal government to be the one that sets interstate commerce. It’s in our Constitution. You can’t have a patchwork of 50 states,” said Sen. Bernie Moreno, an Ohio Republican. But Moreno said he was unsure if the House’s proposed ban could make it through Senate procedure.
The AI provision in the bill states that “no state or political subdivision may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems.” The language could bar regulations on systems ranging from popular commercial models like ChatGPT to those that help make decisions about who gets hired or finds housing.
State regulations on AI’s usage in business, research, public utilities, educational settings and government would be banned.
The congressional pushback against state-led AI regulation is part of a broader move led by the Trump administration to do away with policies and business approaches that have sought to limit AI’s harms and pervasive bias.
Half of all U.S. states so far have enacted legislation regulating AI deepfakes in political campaigns, according to a tracker from the watchdog organization Public Citizen.
Most of those laws were passed within the last year, as incidents in democratic elections around the globe in 2024 highlighted the threat of lifelike AI audio clips, videos and images to deceive voters.
California state Sen. Scott Wiener called the Republican proposal “truly gross” in a social media post. Wiener, a San Francisco Democrat, authored landmark legislation last year that would have created first-in-the-nation safety measures for advanced artificial intelligence models. The bill was vetoed by California Gov. Gavin Newsom, a fellow San Francisco Democrat.
“Congress is incapable of meaningful AI regulation to protect the public. It is, however, quite capable of failing to act while also banning states from acting,” Wiener wrote.
A bipartisan group of dozens of state attorneys general also sent a letter to Congress on Friday opposing the bill.
“AI brings real promise, but also real danger, and South Carolina has been doing the hard work to protect our citizens,” said South Carolina Attorney General Alan Wilson, a Republican, in a statement. “Now, instead of stepping up with real solutions, Congress wants to tie our hands and push a one-size-fits-all mandate from Washington without a clear direction. That’s not leadership, that’s federal overreach.”
As the debate unfolds, AI industry leaders are pressing ahead on research while competing with rivals to develop the best — and most widely used — AI systems. They have pushed federal lawmakers for uniform and unintrusive rules on the technology, saying they need to move quickly on the latest models to compete with Chinese firms.
Sam Altman, the CEO of ChatGPT maker OpenAI, testified in a Senate hearing last week that a “patchwork” of AI regulations “would be quite burdensome and significantly impair our ability to do what we need to do.”
“One federal framework, that is light touch, that we can understand and that lets us move with the speed that this moment calls for seems important and fine,” Altman told Sen. Cynthia Lummis, a Wyoming Republican.
And Sen. Ted Cruz floated the idea of a 10-year “learning period” for AI at the same hearing, which included three other tech company executives.
“Would you support a 10-year learning period on states issuing comprehensive AI regulation, or some form of federal preemption to create an even playing field for AI developers and employers?” asked the Texas Republican.
Altman responded that he was “not sure what a 10-year learning period means, but I think having one federal approach focused on light touch and an even playing field sounds great to me.”
Space Force, governors at odds over plans to pull talent from National Guard units
The leader of the U.S. Space Force is pressing forward with a proposal to transfer personnel from Air National Guard units to support the growing needs of the relatively new military branch. However, several state governors are pushing back, arguing that the move undermines their authority over their respective National Guard forces.
The plan would impact a total of just 578 service members across six states and the Air National Guard headquarters. Rather than establishing a separate Space Force National Guard — which officials say would be inefficient due to its small size — the initiative aims to integrate these personnel directly into the Space Force structure.
“We’re actively evaluating where we want to place our part-time workforce and the roles they’ll fill,” said Gen. Chance Saltzman, Chief of Space Operations, during remarks at a POLITICO conference on Thursday.
The Space Force was established by President Donald Trump in late 2019, during his first term. In the years since, the Air Force has transferred its space missions into the now five-year-old military branch — except for the 578 positions still contained in the Air National Guard, which is part of the Air Force. In the 2025 defense bill, Congress mandated that those positions move over to the Space Force as well.
The transferred service members would be a part-time force like they are now, just serving under the Space Force instead of their state units.
But space missions are among the most lucrative across the military and the private sector, and states that lose space-mission billets risk losing highly valuable part-time workforce members if those members have to relocate to transfer into the Space Force.
Last month, the National Governors Association said the transfers violate their right to retain control over their state units.
“We urge that any transfers cease immediately and that there be direct and open engagement with governors,” the Association said in April. The group was not immediately available to comment on Space Force’s plan.
“There’s a lot of concern in the National Guard about these individuals who are highly skilled that want to be in the Guard being transferred out,” Oklahoma Republican Sen. Markwayne Mullin said at an Air Force manpower hearing this week.
The contention between the states and the Space Force has meant the service has so far been unable to approach individual members about transferring in.
According to the legislation, each affected National Guard member will have the option either to stay with their unit — and be retrained in another specialty — or to join the Space Force. Even for those who do transfer, their positions would remain located in those same states for at least the next 10 years under the 2025 legislation.
The affected personnel include 33 from Alaska, 126 from California, 119 from Colorado, 75 from Florida, 130 from Hawaii, 69 from Ohio and 26 from Air National Guard headquarters.
Microsoft confirms supplying AI to Israeli military, denies use in Gaza attacks
WASHINGTON (AP) — Microsoft has confirmed providing advanced artificial intelligence and cloud services to the Israeli military during the Gaza conflict, including support for efforts to locate and rescue hostages. However, the tech giant insists there is no evidence its technologies were used to harm civilians in Gaza.
In a blog post published Thursday, Microsoft acknowledged its involvement in the war following the October 7, 2023, Hamas attack that killed around 1,200 Israelis and triggered a war in Gaza, where tens of thousands have since died. This marks the company’s first public statement on its support to Israel’s military.
The announcement comes months after an Associated Press investigation revealed Microsoft's previously undisclosed ties with Israel’s Ministry of Defense, showing a sharp increase in the military’s use of Microsoft’s AI services after the October assault. The military reportedly used Microsoft’s Azure platform to process surveillance data, which could be integrated with its AI-driven targeting systems.
While Microsoft said its support included cloud services, translation tools, and cyber defense assistance, it also stressed that the help was limited, selectively approved, and aimed at saving hostages. The company claimed it had not found evidence that its technologies were used to intentionally target civilians or violate its ethical use policies.
Prompted by employee protests and media scrutiny, Microsoft launched an internal review and hired an external firm for further investigation. However, the company has not disclosed the name of the external firm, the full report, or whether Israeli officials were consulted during the process.
The blog post also noted Microsoft lacks visibility into how its products are used once deployed on customer servers or third-party platforms, limiting its ability to fully track their usage in war zones.
Israel’s military also has cloud and AI contracts with other U.S. tech giants including Google, Amazon, and Palantir. Like its competitors, Microsoft said it enforces usage restrictions through its Acceptable Use Policy and AI Code of Conduct, asserting that no violations had been identified.
Experts say Microsoft’s statement sets a notable precedent in corporate responsibility. Emelia Probasco of Georgetown University remarked it is rare for a tech company to impose ethical usage terms on a government engaged in active conflict.
Nevertheless, critics remain skeptical. “No Azure for Apartheid,” a group of Microsoft employees and alumni, accused the company of attempting to polish its image rather than address real accountability concerns. Former employee Hossam Nasr, who was fired after organizing a vigil for Palestinians, criticized the company for not releasing the full investigation report.
Cindy Cohn of the Electronic Frontier Foundation welcomed Microsoft’s partial transparency but emphasized that many questions remain unanswered—especially regarding how Israeli forces are using Microsoft tools in military operations that have led to high civilian casualties.
Israeli raids, such as one in Rafah in February and another in Nuseirat in June, have rescued hostages but resulted in hundreds of Palestinian deaths, fueling continued debate over the ethical implications of AI in modern warfare.