
Hitachi Energy Ireland Limited
Providing innovative energy solutions for a sustainable, electrified future.
Research Intern – Human‑Centered Transparency for GenAI Systems
Research intern developing human‑centered transparency solutions for generative AI systems.
Job Highlights
About the Role
The intern will conduct an in‑depth literature survey on AI interpretability, Explainable AI (XAI), and human‑AI interaction, with an emphasis on transparency in GenAI systems. They will investigate methodologies and frameworks for interpreting AI model decisions across domains such as computing, coding, and software engineering. The role involves implementing ML/GenAI techniques to develop human‑centered transparency solutions, running experiments, building a codebase, and preparing a technical report, a presentation, and potentially a peer‑reviewed conference paper.
Key Responsibilities
- Conduct an in‑depth literature survey on AI interpretability, XAI, and human‑AI interaction for GenAI systems
- Investigate methodologies and frameworks that interpret AI model decisions across computing, coding, and software engineering domains
- Implement ML/GenAI techniques to create human‑centered transparency solutions
- Run experiments and evaluate results
- Develop and maintain a codebase
- Prepare a technical report, presentation, and potential peer‑reviewed conference paper
What You Bring
Candidates should be PhD students or candidates, or senior thesis‑based master's students, in Computer Science, Machine Learning, or Software or Electrical Engineering. A deep understanding of AI/ML methodologies, including model architectures, training, and evaluation, is required, along with proficiency in Python and ML libraries (TensorFlow/PyTorch, SciPy, Scikit‑learn). Familiarity with GenAI implementation, prompting, evaluation, and agent/tool‑use patterns (e.g., LangChain, retrieval‑augmented generation) is essential, as is prior research experience in problem definition, solution exploration, and result analysis, coupled with strong written and spoken communication skills. Preferred qualifications include familiarity with XAI/interpretability libraries such as SHAP or LIME; experience with OpenAI, Azure, or open‑source LLMs; and at least one first‑author publication in a top AI/ML conference or journal. Candidates should also demonstrate critical and innovative thinking, the ability to lead ideas, and experience with multi‑agent frameworks, visualization techniques for interpretability, and handling large datasets.
Requirements
- PhD student or candidate, or senior thesis‑based master's student, in Computer Science, ML, or Software/Electrical Engineering
- Proficiency in Python and strong knowledge of AI/ML model architectures, training, and evaluation
- Experience with ML libraries such as TensorFlow/PyTorch, SciPy, and Scikit‑learn
- Familiarity with GenAI implementation, prompting, evaluation, and agent/tool‑use patterns (e.g., LangChain, retrieval‑augmented generation)
- Preferred: experience with XAI/interpretability tools such as SHAP or LIME
- Strong written and spoken communication skills
Benefits
Hitachi Energy brings together world‑class talent to drive innovation for a better future, encouraging a pioneering spirit, imaginative ideas, and fearless problem‑solving. Employees work on sustainability solutions, smart‑city infrastructure, and a wide range of exciting projects while enjoying a collaborative, creative environment that supports career growth and the exploration of new horizons.
Work Environment
Hybrid