Human judgement, not algorithms alone, must lead 'smart' safety measures, say Intersec 2026 experts
With AI spending projected to exceed US$2 trillion in 2026, speakers at the 2026 Intersec Health and Safety Conference emphasized that artificial intelligence is no longer confined to ‘deep tech’ but is now embedded in safety-critical frontline roles.
With global spending on artificial intelligence (AI) projected to exceed US$2 trillion in 2026 (Gartner, 2025), senior government and industry leaders at a high-level panel discussion at the 27th edition of Intersec examined how AI is rapidly transitioning from ‘deep tech’ to everyday frontline operations in safety-critical environments.
During a session entitled “AI and Safety 4.0: Rethinking Human Risk,” the panel discussed how AI is transforming safety practices, ethical decision-making, and risk management culture, emphasizing that intelligent tools should enhance rather than replace human responsibility and leadership. Panelists observed that AI is currently used to develop predictive analytics, identify risk patterns, and trigger early warnings in complex operations. When deployed correctly, this technology has significant potential to reduce incidents and enhance efficiency. At the same time, boards and executives were urged to adopt human-centered AI safety strategies that establish clear boundaries for AI decision-making.

Dr Waddah S. Ghanem Al Hashmi, Chairman, Federal Committee for OSH, emphasized that companies cannot outsource accountability to algorithms, drawing a comparison with human employees. He said: “We delegate responsibility, but we do not delegate accountability. In all cases, I delegated that responsibility, but I remain accountable. So, the employer, anybody who engages with AI or uses AI, continues to carry the accountability.” Speakers also pointed to real-world refinery cases where the introduction of AI-based predictive tools led to a drop in manual inspections, raising questions about over-reliance on technology and the possibility of creating new blind spots in safety systems.
Dr Islam Adra, Vice President, HSE MENA & SCO, DP World, explained why trust is a significant challenge when it comes to integrating AI into the workplace: “When you put 'trust' and 'AI' in the same sentence, it is kind of an oxymoron. There is a contradiction there, because AI has done a very good job building a very bad reputation. When we think of AI, especially in the workplace, we think of surveillance, we think of control, we think of monitoring, and even the replacement of workers.”
The panel, which was moderated by Isaac Ochulor, Senior Engineer and Digital Transformation Strategist at Suhail Almazroui Transportation, and also included Hari Kumar Polavarapu, Director at Emirates National Oil Company Limited (ENOC), urged boards and executives across the region to integrate AI into their safety strategy rather than treating it as a standalone technology project. The panel was part of the Intersec Health & Safety Conference 2026, powered by the Institution of Occupational Safety and Health, which brings together global thought leaders, innovators, and HSE professionals for an immersive two-day program focused on emerging technology, occupational health and mental well-being, and critical challenges across road, construction, and industrial safety.
Commenting on the conference, Dishan Isaac, Show Director of Intersec at Messe Frankfurt Middle East, said: “Intersec has always been a platform for the region's safety community to unite to address future risks. This year, artificial intelligence (AI) is a prime topic of that agenda. Our objective is to assist governments and industries in leveraging this powerful technology to enhance safety, protect individuals, and foster trust across every sector.”