Wrapping up and looking back
Adrien Foucart, PhD in biomedical engineering.
This website is guaranteed 100% human-written, ad-free and tracker-free. Follow updates using the RSS Feed or by following me on Mastodon
I have been very lucky through my academic career, but in the end not lucky enough. September 26th will be my last day as a postdoc researcher, and as I am in the process of wrapping up my research and cleaning out my desk, I would like to indulge in a tiny bit of self-absorbed rambling (it’s my blog, I’m allowed to do that). To make it a little easier on anyone who actually wants to read this, here is a table of contents (please don’t use an AI summary: read it or don’t!):
I joined the LISA at the Université Libre de Bruxelles in 2011, as a researcher on an academia/industry project. I worked for two years on vehicle detection and classification from traffic cameras, and twice failed to secure funding to start a PhD before giving up and moving to the private sector for two years. In 2015, a teaching assistant position opened up and I came back, this time working on digital pathology image analysis. I defended my PhD in 2022, and started a postdoc, still in the same lab, working on multimodal image registration in preclinical imaging. Finally, in early 2025, I applied for a tenured position on multimodal AI in medicine, and came in second in the selection process (which is a nice way of saying that I wasn’t selected). As that was my opportunity to finally get a permanent position, I quickly decided not to try for another round of postdoc funding. It’s time to go; more on that at the end of this post.
So what did I learn from the world of academia?
Most of the people doing most of the day-to-day scientific work in academia are on short-term, precarious contracts where funding can quickly vanish. This means that most of the people who should be focused on their scientific work are in reality spending a large part of their time and energy on securing the next round of funding.
This is not a very efficient use of our limited resources, to say the least, and it’s not getting better.
The precarity of academic jobs also means that researchers need to worry a lot about publishing, as often as possible, because that’s the metric that we’re apparently stuck with. Not publishing enough means fewer chances of converting these short-term jobs into a tenured position.
Publishing scientific output is great, and a necessary part of science, but when it becomes the goal rather than a means of communicating our results, it becomes a very problematic incentive. There is a huge crisis of fake science going on right now (see Retraction Watch, or Elisabeth Bik’s Science Integrity Digest, or anything posted by Guillaume Cabanac for examples…), and it started even before generative AI became ubiquitous. A big factor in how much bullshit gets through peer review is simply the insane number of articles being pushed through the process. Peer reviewers take less time per article, and are themselves sometimes incentivized to rush through articles in order to gain publication credits.
If I had to start over and choose my publication strategy right now, I would no longer look at impact factors, and I would stay away from the Big Publishers (Elsevier, Springer/Nature…). Instead, I would look at true Diamond Open Access journals, and make sure to always make the supporting code and data available as well.
Which leads me to…
Most of the research in my domain is supported by software. Most of our experimental work is, generally, applying some software to some data. That is, in theory, great for replicability… as long as the code and data are made available to the rest of the research community.
I have become more radicalized on this as time goes on, and as I’ve seen how often published results turn out to be wrong when it’s actually possible to check them. In fact, a not insignificant part of my own research was based on finding other people’s mistakes. The truth is, researchers (even in AI-related fields) are not necessarily good software developers. There is a reason why Greg Wilson et al.’s Best practices for scientific computing had to be followed up by Good enough practices in scientific computing a few years later: everyone’s research depends on some software development, and most scientists are not developers.
The only way for any scientific result to be useful to the rest of the community (and not just to add a DOI to a researcher’s tally) is if the supporting software and data are open source, so that mistakes can be fixed, and misrepresentations of the software in the text can be commented on.
It’s of course not always possible to publish all the data (particularly with medical data), but there should always be enough to be able to check that the code is doing what it’s supposed to be doing.
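As an illustration of what “enough to check” can look like (the names below are made up; this is a sketch of the pattern, not a prescription): ship a small, shareable sample of the data with the code, along with an automated test that runs the pipeline on that sample and compares the result to the published output.

```python
# Sketch of a reproducibility check. "mypipeline" and its segment()
# function are hypothetical stand-ins for the code behind a paper.
import numpy as np

from mypipeline import segment


def test_pipeline_reproduces_published_sample_result():
    # A small, anonymized sample that *can* be shared, committed with the code.
    image = np.load("tests/data/sample_input.npy")
    # The output that the published pipeline produced on that same sample.
    expected = np.load("tests/data/expected_output.npy")

    result = segment(image)

    # Tolerances absorb benign numerical differences across platforms,
    # while still catching real changes in behavior.
    np.testing.assert_allclose(result, expected, rtol=1e-5, atol=1e-8)
```

Anyone cloning the repository can run this and immediately see whether the code still does what the paper claims, even without access to the full (private) dataset.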
The best, most constructive interactions I’ve had with colleagues over my research (or over their research) have always been at small-scale events. Small, local conferences. Informal meetings in the lab. Talking over lunch while visiting another lab.
Huge conferences are great on the resume, but they mostly leave me exhausted, with too much noise to sift through.
Did you know that ICCV has recently limited each author “to a maximum of 25 paper submissions”? This is insane. The idea that a person can significantly contribute to 25 papers at the same conference is absolutely ridiculous. At the SIPAIM 2023 conference, I was first author of one paper and second author of another, and I really don’t think I could have contributed anything valuable to a third one at the same time, even though these were small “work-in-progress” papers for a small (but very good!) conference.
The limited resources of research labs are, in my opinion, better spent attending or organizing smaller-scale events than chasing the prestigious conferences.
I don’t think ChatGPT is a very good program.
I think the GPT models are very good LLMs, and that LLMs, Vision Transformers, and other large modern architectures are really interesting and can do great things in many different fields. But chatbots are one of the worst ways of using them.
Unfortunately, many researchers are now using ChatGPT (or Claude, etc., but mostly ChatGPT) to write their code and their research papers, and to summarize the existing literature. The problem is that chatbots are notoriously bad at summarizing things, at writing things that actually make sense, and at writing code that does what it’s supposed to do.
It also makes people worse at critical thinking, and limits, even more than before, the opportunities for lesser-known labs and researchers to get “on the radar” of their community. With ChatGPT, the state of the art will be incomplete, the experimental results are likely to be irrelevant, and the write-up will be filled with bullshit and plagiarism.
So maybe we shouldn’t do that?
Researchers are not the only ones falling for the AI hype. Students and teachers are, in fact, among the populations most aggressively pursued by AI salesmen. The accepted wisdom today seems to be that “AI is here to stay”, that teachers need to adapt to it, and that we can’t expect students not to use it. Most master’s theses, if not all, are now written “with the help” of a chatbot. And why wouldn’t students use it? It’s right there, available in institutionally provided software such as Microsoft’s suite. We warn students against it on one side, and provide it to them on the other.
But is it really a bad thing? If generative AI-powered chatbots are the new reality, then surely students should learn how to use them. Maybe, but they should also learn when to use them, when not to use them, and therefore how not to use them.
Let’s take two hypothetical scenarios for the future of “AI”.
Scenario one is the tech optimist’s scenario, where AI is on a path of exponential growth and will soon be able to do every intellectual job that currently requires a human. If AI gets to this point, then what role could our current students possibly have in this new world? A truly intelligent AI will not need “prompt engineering”. Prompt engineering is the art of coaxing a dumb AI into producing somewhat acceptable results. Truly intelligent AIs are incompatible with our current model of society: there is no place there for (most) human jobs. Learning “how to use AI” will not be helpful then. Owning the AI is the key, and those positions are already filled.
Fortunately, I don’t think scenario one is very likely, at least in the next decades. GPT-5 was such a flop that Sam Altman quickly had to restore version 4 and start hyping version 6. It’s been clear since version 3.5 that a plateau has been reached: we now need to put exponentially more power into it for ever-diminishing returns. The “reasoning” models take minutes to process queries, with results that are still plagued by all the fundamental problems of LLM-based generative AI: no actual understanding of what they are doing.
So scenario two is that, at some point in the not-too-distant future, the AI hype bubble bursts. That is going to have a nasty effect on the tech industry. I wouldn’t be surprised if Microsoft didn’t survive the crash, as they have gone all-in on generative AI. It’s also going to mean that, with the departure of investor money, any remaining AI company will need to push the cost of training and running those models onto the customer. And that’s going to turn AI chatbots into very expensive toys.
Does that mean that generative AI will be going away? Probably not. But adding friction to the system (such as having to pay for each interaction) may make people a little bit more willing to spend a couple of minutes writing a damn email by themselves, reading the material that’s given to them, and thinking about the exercises that they have to solve.
Because right now, many students are not learning. They have become machines that put instructions into a chatbot and copy-paste the results. Of course, for many exercises, it works well: exercises designed for students are rarely so original that you can’t find something very similar online, and most instructors are willing to accept somewhat flawed answers as long as it seems like the student learned something. Then, at some point, the student hits a bump in the road and faces a problem that the chatbot cannot solve. And now, they can’t think the problem through, because they haven’t developed any of the necessary skills.
I am very happy to have gone through university before using AI was an option. I would absolutely have fallen for it at the time. I certainly used every means available to make passing the exams easier, even if it meant not interacting with the subject matter fully. Hopefully we can soon find a way to work around AI, and to convince students to learn how to use their own brains.
Friday September 26th, 2025, will be my last day as a postdoc researcher at the Université Libre de Bruxelles.
Monday September 29th, 2025, will be my first day as a teacher at the Haute École Léonard de Vinci, where I’ll join the teaching team for their Bachelor in Informatics.
I am very excited about it. I love teaching, and I particularly love teaching about coding, computer science, and AI. I will have the opportunity to do all of that there.
That also means that this blog is going to be wrapped up. There won’t be much research going on. I may post a little bit more about an ongoing project that I’m finishing, but for the most part my time as a researcher is done. My time as a “blogger” isn’t, though. I intend to keep my irregular posting schedule on my personal blog 2xRien, and I will be posting (in French, and probably just as irregularly) about my teaching experience on my brand new blog, clavier ouvert. The best way to get notified when I post anything, if you’re interested in following me, is through the RSS feeds available on each of my blogs. I’m posting less and less of my stuff on social networks. I think choosing to subscribe to RSS feeds is a much better way of curating your own feed full of interesting things to read, without Zuckerberg et al. chiming in with what they think you should read.
This blog will stay right here, and the archives will remain accessible for the time being. I like the idea of keeping a trace of the things I communicated about through these past five years. But now, on to new challenges!