Stefan Heidekrüger

Department of Informatics (I18)
Technical University of Munich
 
E-Mail: stefan.heidekrueger@tum.de
Office:
Room 01.10.056
Boltzmannstr. 3
85748 Garching, Germany
Phone: +49 (0) 89 289 - 17530
Hours: by arrangement
 

 

I'm a PhD student at the DSS chair, supervised by Prof. Bichler. My research focuses on the computation of equilibria in incomplete-information games, especially markets and auctions, using multi-agent reinforcement learning methods.
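For readers unfamiliar with the topic, the toy sketch below illustrates the general flavor of equilibrium learning in auctions: two symmetric bidders in a first-price sealed-bid auction with uniform valuations adjust a linear bidding strategy in self-play, using a zeroth-order (finite-difference) pseudogradient estimate of expected utility. This is only an illustrative example under simplifying assumptions, not the implementation or the algorithm from the papers listed below; the analytical Bayes-Nash equilibrium of this setting is b(v) = v/2, so the learned slope should approach 0.5.

# Toy sketch (illustrative only, not the papers' implementation):
# two symmetric bidders in a first-price sealed-bid auction with
# uniform [0, 1] valuations learn a linear bid strategy b(v) = alpha * v
# via zeroth-order pseudogradient ascent in self-play.
import random

N_ROUNDS = 500    # learning iterations
BATCH = 2000      # valuation samples per utility estimate
SIGMA = 0.05      # perturbation for the finite-difference gradient
LR = 0.2          # step size

def expected_utility(alpha_own, alpha_opp):
    """Monte Carlo estimate of bidder 1's expected utility when the
    bidders use linear strategies with slopes alpha_own and alpha_opp."""
    total = 0.0
    for _ in range(BATCH):
        v1, v2 = random.random(), random.random()
        if alpha_own * v1 > alpha_opp * v2:   # bidder 1 wins, pays own bid
            total += v1 - alpha_own * v1
    return total / BATCH

alpha = 0.9                                   # start far from equilibrium
for _ in range(N_ROUNDS):
    # Self-play: the opponent uses the current strategy. Estimate the
    # utility gradient w.r.t. the own slope by a symmetric finite difference.
    grad = (expected_utility(alpha + SIGMA, alpha)
            - expected_utility(alpha - SIGMA, alpha)) / (2 * SIGMA)
    alpha += LR * grad                        # pseudogradient ascent step

print(f"learned slope: {alpha:.3f}  (analytical BNE slope: 0.5)")

In the actual research, the bid strategies are neural networks and the settings are richer (many bidders, combinatorial auctions), but the sketch conveys the basic idea of gradient-based equilibrium learning.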

 

Short Bio

  • Since 2018       Research Associate, Decision Sciences & Systems, TUM
  • 2016 - 2018      Data Scientist, Business Analytics and Artificial Intelligence, Telefónica Germany
  • 2014 - 2016      M.Sc. Mathematics in Operations Research, Technische Universität München
  • 2014             Erasmus+ student at KTH Royal Institute of Technology (Stockholm, Sweden)
  • 2013 - 2016      Internships at a.hartrodt (2013) and zeb.rolfes.schierenbeck.associates (2015);
                     working student positions at a.hartrodt (2013-14), Telefónica Germany (2016), and SAP (2016);
                     student research assistant positions at TUM (2014, 2015) and Helmholtz Zentrum München (2015-16)
  • 2012 - 2013      Visiting Student at The Hong Kong University of Science and Technology
  • 2010 - 2014      B.Sc. Mathematics, TUM

  

Publications 

Working Papers

Learning to Bid: Computing Bayesian Nash Equilibrium Strategies in Auctions via Neural Pseudo-gradient Ascent
M. Bichler, M. Fichtl, S. Heidekrüger, N. Kohring, and P. Sutterer
Working Paper, 2021.
A recent version was presented at the 2020 annual meeting of the NBER Working Group on Market Design: Link

Workshop Papers

Equilibrium Learning in Combinatorial Auctions: Computing Approximate Bayesian Nash Equilibria via Pseudogradient Dynamics 
S. Heidekrüger, P. Sutterer, N. Kohring, M. Fichtl, and M. Bichler
Presented at the 2020 Workshop on Information Technology and Systems (WITS20) and the 2021 AAAI Workshop on Reinforcement Learning in Games (AAAI-RLG-21): Link

Multiagent Learning for Equilibrium Computation in Auction Markets
S. Heidekrüger, P. Sutterer, N. Kohring, and M. Bichler
AAAI Spring Symposium on Challenges and Opportunities for Multi-Agent Reinforcement Learning (COMARL-21), March 2021 (forthcoming)

Computing approximate Bayes-Nash Equilibria through Neural Self-Play.
S. Heidekrüger, P. Sutterer, and M. Bichler
Workshop on Information Technology and Systems (WITS19), Munich, Germany, 2019.

 

Teaching

Courses

  • Business Analytics, Teaching Assistant (Winter Term 2018/19, 2019/20, 2020/21)
  • Seminar on Data Mining, Teaching Assistant (Summer Term 2019, 2020)
  • Seminar ITUB - "IT and Management Consulting", Teaching Assistant (Winter Term 2019/20, 2020/21)

Completed Student Projects


  • Daniel Schroter: Reinforcement Learning in the MIT Beer Distribution Game, BSc Thesis, Informatics (2020)
  • Markus Ewert: Efficient Query Strategies in Preference Elicitation via Deep Learning, MSc Thesis, Information Systems (2020)
  • Anne Christopher: Fast Solvers for Batched Constrained Optimization Problems, MSc Thesis, Mathematics in Data Science (2020)
  • Lukas Feye: Confidence-Moderated Policy Advice in Multi-Agent Reinforcement Learning, BSc Thesis, Information Systems (2020)
  • Florian Ziesche: Human Interpretable Machine Learning: A Machine Learning Approach for Risk Scoring, MSc Thesis, Management & Technology (2019)
  • Sebastian Rief: Detection of anomalies in large-scale accounting data using unsupervised machine learning, MSc Thesis, Management & Technology (2019)
  • Kevin D. Falkenstein: Learning Equilibrium Strategies in Auctions via Deep Neural Networks, MSc Thesis, Information Systems (2018)
