Latest ideas

An Inside Look at the Infrastructure of this Website

Building and maintaining a personal website, to me, goes far beyond just publishing content. It also involves tinkering with a multitude of tools and technologies, and designing an infrastructure that makes publishing content easy and efficient. In this post, let’s delve into the details of the tech stack underpinning this blog. As you can imagine, this is the first, more technical piece of writing on this page, so feel free to skip it if that’s not for you.

Static Sites and the 11TEA Stack 🍵

Static sites are web pages with fixed content coded in HTML, CSS, and occasionally JavaScript, which are delivered to the user exactly as stored by the server. These sites display the same content to every visitor and do not require a database or backend server code. This type of website excels in delivering speed, reliability, and security. For blogs and personal websites, the content often doesn’t change rapidly or dynamically, making static sites a perfect fit.

Eleventy is the static site generator I use for this website. Eleventy prides itself on its top-notch performance. What I also like about it is that it doesn’t tie you to any client-side JavaScript frameworks. It also supports several template languages.

Eleventy forms the core of what is called the ElevenTEA stack, where it handles generating the static pages. The ‘T’ stands for TailwindCSS. Tailwind is a CSS framework that provides a set of pre-defined utility classes which can be composed to build any website design, directly in your HTML files. For example, we can apply several utility classes to a button:

<button class="bg-slate-900 hover:bg-slate-700 focus:outline-none focus:ring-2 focus:ring-slate-400 focus:ring-offset-2 focus:ring-offset-slate-50 text-white font-semibold h-12 px-6 rounded-lg w-full flex items-center justify-center sm:w-auto dark:bg-sky-500 dark:highlight-white/20 dark:hover:bg-sky-400">
Launch Website
</button>

And the button will look like this:

While this long list of CSS classes may look confusing at first, after some time working with Tailwind it becomes remarkably convenient.

Lastly, the ‘A’ in the ElevenTEA stack stands for AlpineJS. What Tailwind is for CSS, Alpine is for JavaScript. My personal website doesn’t use a lot of JavaScript, which is why I wouldn’t want to pull in a huge frontend framework. AlpineJS is minimal and shares the same useful characteristic: you can write JavaScript behaviour directly into your HTML markup.

For example, we can build a simple counter like this:

<div x-data="{ count: 0 }">
<button x-on:click="count++">Increment</button>
<span x-text="count"></span>
</div>

See all these ‘x-’ attributes in the HTML markup? Those are the Alpine directives. Our counter will look like this:

Again, it may feel weird at first not having your JavaScript in a separate file or inside a <script> tag, but after having worked with it for a while I love having markup, style, and behaviour all in one HTML file.

Hosting via GitHub and Netlify

All the code that drives this website is entirely open source on GitHub at julianprester/website. Go check it out and feel free to copy things for your own website if you like the idea of a simple, performant, free, and low-maintenance personal website. Because static sites are so efficient to host, there are a bunch of hosting providers out there that will host them for free. For example, I’m using Netlify to host this website. Netlify deploys directly from a GitHub repository, which gives me version control and efficient code management: whenever I push an update to GitHub, Netlify instantly rebuilds my website and deploys it. While I use only a fraction of its features, these hosting providers actually offer many more advanced perks like form handling, A/B testing, pull request previews, and more.
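Deployment itself needs almost no configuration. A minimal `netlify.toml` for an Eleventy site might look like the sketch below; the build command and publish directory are assumptions based on a typical Eleventy setup, not necessarily what this site uses:

```toml
# netlify.toml — minimal sketch; command and publish
# directory are assumed, typical for an Eleventy site
[build]
  command = "npx @11ty/eleventy"
  publish = "_site"
```

With this in the repository root, every push to GitHub triggers a fresh build and deploy on Netlify.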

On top of that, since everything is kept on GitHub, I can use GitHub Actions to automate things. For example, as you might have seen, I’m sharing interesting links I find on the internet on the right side of this page. Adding these links is super easy using GitHub Actions. My workflow looks like this:

  1. I tend to consume short-form content on my phone.
  2. When I come across something I think is worth reading, I save the link to my wallabag bookmarking service.
  3. I later read the post or article, and when I think it’s worth sharing, I share it with my Gotify app, including a short comment.
  4. A GitHub action, which you can find here, automatically checks my Gotify every 24 hours for new links, turns them into a markdown file, pushes them to GitHub, and voilà, Netlify automatically deploys a refreshed webpage.
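The steps above can be sketched as a scheduled GitHub Actions workflow. Everything in this sketch is hypothetical: the workflow file name, the fetch script, and the secret name are stand-ins for illustration; the real action lives in the repository.

```yaml
# .github/workflows/fetch-links.yml — hypothetical sketch of the
# scheduled link-fetching action described above
name: Fetch shared links
on:
  schedule:
    - cron: "0 4 * * *" # run once every 24 hours
jobs:
  fetch:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Hypothetical script: pulls new links from Gotify
      # and writes them out as markdown files
      - run: python fetch_gotify_links.py
        env:
          GOTIFY_TOKEN: ${{ secrets.GOTIFY_TOKEN }}
      - run: |
          git config user.name "link-bot"
          git config user.email "bot@users.noreply.github.com"
          git add links/
          git commit -m "Add new links" || echo "no new links"
          git push
```

The commit pushed by the last step is what kicks off the Netlify rebuild.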

This may sound quite complicated, and you might not know all of the services I referenced here, but once set up this is an extremely efficient workflow for me. Have a look at these services; I really use them all the time. I might also write separate posts about them and how I use them in the future.

RSS for Content Distribution

Ever wondered what all these radio icons on this website are about? RSS, or Really Simple Syndication, is a web feed format that allows users to access updates to online content in a standardized, computer-readable way. It’s an older technology standard from the early internet days that often goes unnoticed in today’s social-media-dominated landscape. But that’s exactly why I think it is important to keep it alive. In fact, even though few people are aware of it, RSS is very much alive in the podcasting world: most podcasts still notify their listeners about new episodes via RSS. Although it might seem like a remnant of the old internet, it holds significant utility for content syndication. I’m using RSS on this website to syndicate new articles and updates directly to people’s feed readers. Eleventy creates the RSS feeds for me automatically, and I construct individual feeds for various content types, be they general blog posts, book reviews, or links. I don’t know whether anyone has subscribed to the feeds, but that’s kind of what makes it appealing to me. I might even implement more advanced content syndication strategies later on, using the RSS feeds to share my updates to social media.
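Under the hood, an RSS feed is just an XML file listing recent items. Eleventy’s official RSS plugin generates it from a template; the hand-rolled sketch below, with made-up post data, shows roughly what the output boils down to:

```javascript
// Minimal sketch of what an RSS feed is: plain XML.
// The post data here is invented purely for illustration.
const posts = [
  {
    title: "Hello World",
    url: "https://example.com/hello/",
    date: "Mon, 01 Jan 2024 00:00:00 GMT",
  },
];

function renderFeed(posts) {
  // Render each post as an RSS <item> element
  const items = posts
    .map(
      (p) => `    <item>
      <title>${p.title}</title>
      <link>${p.url}</link>
      <pubDate>${p.date}</pubDate>
    </item>`
    )
    .join("\n");

  // Wrap the items in the standard RSS 2.0 envelope
  return `<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0">
  <channel>
    <title>My Blog</title>
${items}
  </channel>
</rss>`;
}
```

A feed reader simply polls this file and surfaces any `<item>` it hasn’t seen before, which is all the “subscription” mechanism there is.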

So, there you go! That’s a quick tour behind the scenes of this website, revealing the design principles and technology behind its very existence. Hopefully, it provided an understanding of how this website evolved and how it functions behind the front-end you see in your browser.

Working in Public (as an Academic)

Alright! I’m just going to do it. I’m going to try out this working in public thing. This is the very first post of my journey along which I will be sharing ideas and things I learn about academia, technology, and other things. I’m excited that you’re here too.

The idea of “working in public” refers to openly sharing the process behind creating work, rather than just the finished product. Writers, artists, and other creatives often work in isolation, keeping their ideas and progress hidden until a final polished piece is ready for release. Similarly, as academics, we often focus purely on publishing final papers. But rarely do we provide a window into the winding process that produced those polished products.

Here I will explore “working in public” - sharing not just outputs, but the ideas, struggles, and wins throughout my academic journey, inviting you along for the ride. My goal is to document my experiences, unearth new insights through writing, and engage with you, the readers, to refine my thinking. This purpose goes beyond using a website for personal connection or professional branding. Thinking out loud socially, not alone, should improve the quality of my ideas. Explaining and maybe even getting feedback forces me to confront gaps in my understanding.

Overall, I’m aiming to create an online intellectual home, an idea greenhouse, optimised for thinking and creativity. That means carefully designing the space, tools, habits, and community that bring out interesting work (I’m sure there will be space in the future to write more about some if not all of these). This will most likely be an iterative process over many years, but I’m hoping that the payoff of enhanced abilities will make it worthwhile.

In future posts, you can expect:

  • Notes on research papers primarily about social aspects of technology from disciplines such as Information Systems and Organisation Studies.
  • Reviews of books I’m reading. I’ve actually been publishing these here already for a while.
  • Ideas that stand out on their own and that may eventually feed into research papers.
  • Technical posts about software development as an academic and what tools I find useful.
  • Curated content and links from elsewhere on the internet.

Ultimately, this blog represents a public idea repository - one that I believe will enhance my creativity and research. Putting fledgling thoughts out into the world accelerates their refinement. And sharing these ideas in public will uncover insights I’d likely never find in isolation.

Found elsewhere

It’s the End of the Web as We Know It First principle thinking on the web and its publishing model. There is much more at stake than hallucinated references. #

Generative A.I. Arrives in the Gene Editing World of CRISPR Gen AI is now entering the gene editing space. If transformers are good at predicting sequences of words, why wouldn’t they be able to predict DNA sequences? #

AI Regulation is Unsafe We cannot rely on governments to regulate AI. All of AI’s risks lie outside governments’ boundaries and election terms. #

On the Conversational Persuasiveness of Large Language Models: A Randomized Controlled Trial LLMs seem to have already gained super-human capabilities when it comes to persuasion. In this RCT LLMs were better at changing someone’s opinion than humans, given the same information. #

Return to Office Mandates Don’t Improve Employee or Company Performance Strong empirical evidence that return-to-office mandates don’t improve firm performance and hurt employee satisfaction. #

Remote work statistics and trends in 2024 USAToday survey says that 14% of US employees work entirely from home. This number will increase to 20% in 2025. #

After AI beat them, professional go players got better and more creative Go is one of the AI use cases, where super-human ability has already been achieved. Go players aren’t contemplating though; they’re levelling up their skills and creativity. Encouraging findings for the future of work. #

The jobs being replaced by AI - an analysis of 5M freelancing jobs - bloomberry Online labour markets could be a bellwether of what’s to come for the future of work with AI. So far only a few professions are actually being impacted by LLMs. #

Generative UI and Outcome-Oriented Design Mass-personalization may finally become a reality with generative AI. #

Automated Unit Test Improvement using Large Language Models at Meta Software development is THE use case for LLMs. Meta tried improving their software tests with LLMs. 73% of code recommendations were pushed to production. #

More Agents Is All You Need Agentic workflows are clearly the next frontier in large language models. #

Why isn’t Preprint Review Being Adopted? Why do preprints work but preprint reviewing doesn’t? Preprints don’t compete with journals but preprint reviewing does. #

Large language models can do jaw-dropping things. But nobody knows exactly why. The state of AI research today is comparable to physics in the 20th century. Lots of experiments with surprising results to which we don’t have an answer. #

How ML Model Data Poisoning Works in 5 Minutes Data poisoning attacks on LLMs are particularly dangerous because once poisoned it’s almost impossible to fix models. #

How Quickly Do Large Language Models Learn Unexpected Skills? | Quanta Magazine LLMs’ capabilities may not be as ‘emergent’ as we thought. What emerges may be what we measure. #

OpenAI's chatbot store is filling up with spam | TechCrunch Generative AI has been celebrated for democratizing software development as OpenAI allows anyone to publish their GPTs. Now we’re starting to see the downside of democratization: a flood of low-quality chatbots with all kinds of copyright issues. #

Of course AI is extractive, everything is lately • Cory Dransfeldt An interesting take on the extractive nature of basically all technological developments of the last decades. #

Algorithmic progress in language models Compute has obviously been a major contributor to the progress of large language models. This analysis shows that beyond hardware, algorithms have developed at an even faster pace. #

Stealing Part of a Production Language Model AI security becoming a more and more interesting field. Closed models may no longer stay closed after all. #

The Evolution of Work from Home Work from home has now stabilised around 30% in the US. That’s almost four times pre-pandemic levels. #

Research: The Growing Inequality of Who Gets to Work from Home The future of remote work is unevenly distributed. It will be interesting to see the effects on team culture within and between remote and non-remote teams. #

On the Societal Impact of Open Foundation Models A much needed balanced analysis of open foundation models. Models need to be assessed on a ‘marginal risk’ basis rather than taking their total impact. #

CEOs Are Using Return To Office Mandates To Mask Poor Management With return-to-office mandates are we entering a death spiral of poor performance? #

Why I use Firefox Firefox’s Android extension support is a plus for the everyday user; the Gecko engine for anyone with an interest in a diverse web. #

These companies tried a 4-day workweek. More than a year in, they still love it It seems the only reason why 4-day workweek trials are unsuccessful is customer-centric rather than workforce-centric issues. #

The 9-month-old AI Startup Challenging Silicon Valley’s Giants It’s good to see that AI development efforts are not just concentrated on scaling, but also on efficiency and innovation. #

Big Post About Big Context The biggest productivity gains with AI will come from “cognitive NP” hard tasks. Tasks that can be easily verified or revised by humans but that are hard to complete. #

Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training LLMs come with an entirely new class of security issues. Backdoors are already hard to find in traditional software. They are totally undetectable in LLMs. #

Scammy AI-Generated Book Rewrites Are Flooding Amazon AI generated book summaries are flooding Amazon. In the future, it will be incredibly hard to find original content in a sea of derivative AI generated work. #

Hackers can infect network-connected wrenches to install ransomware Internet connected wrenches are a thing now. Imagine your wrench is being held hostage by a ransomware attack. #

My 3-Year Experiment on How to Be a Digital Nomad It’s quite interesting to see all the different pieces of technology coming together to enable a digital vanlife: electric vehicles, solar panels, starlink satellite internet, … #

Survey of 2,778 AI authors: six parts in pictures Perception of AI “progress” among researchers seems to be at an all-time high. Some predictions about when AI will replace humans in intelligent tasks revised by several decades. #

NIST Identifies Types of Cyberattacks That Manipulate Behavior of AI Systems NIST’s typology of cybersecurity attacks against AI systems seems to shift attention to training stage attacks rather than deployment stage prompt injection techniques. #

How is AI impacting science? Fascinating talk on the future role of AI in science. AI will provide predictions of scientific problems, it is our job to validate them and identify the higher-level principles or theories underlying them. #

Will We Be Addicted To Our Phones Forever? An Optimistic Outlook on the Future of Digital Wellbeing The belief that so called “digitally native” problems are easier to solve than others is based on deep techno-optimist assumptions. Are there any good examples where digitally native problems were successfully solved with more technology? #

Things are about to get a lot worse for Generative AI Law suits aside, AI companies should at the very least be open about their training data. Ultimately we will need a training provenance system that allows tracing of training data all the way to model weights. #

Too Much Information As our infrastructures improve we are reducing the friction of research. Is frictionless reproducibility really what drives research overproduction? #

The surprising connection between after-hours work and decreased productivity Time spent working after hours decreases productivity. Slack study shows it’s the quality of work time that matters, not quantity. #

The Great AI Weirding The quantification of AI researchers seems to be a microcosm of the metrified “games” to come for most knowledge workers. #

Preparedness Are we prepared yet? Although OpenAI’s Preparedness Framework leaves room for unknown-unknowns, I’m not convinced that the four identified risk categories are the worst that could happen to humanity with AGI. #

Humans as Cyborgs We have always already been cyberorganised. New technologies are simply affording a deeper entanglement of the human-technology relationship. #

AI companies would be required to disclose copyrighted training data under new bill Since copyright infringement cases against AI companies have mostly been dismissed, this law would at least offer transparency when copyrighted material is used for training. #

Using sequences of life-events to predict human lives - Nature Computational Science AI can now predict when someone will die with close to 80% accuracy. I wonder what insurance companies will do with such models. #

OpenAI Begins Tackling ChatGPT Data Leak Vulnerability · Embrace The Red It’s good to see that something is being done to address prompt injection and data exfiltration vulnerabilities. Not sure if a closed source validation function is the right approach though. #

American workers keep proving they don’t need to return to the office to be productive Workers are still not returning to the office; productivity is increasing anyway. #

AI and Mass Spying - Schneier on Security Until now mass-spying wasn’t feasible because of manual effort required. AI will make mass-spying not only possible, but make mass-surveillance look petty. #

Artificial intelligence systems found to excel at imitation, but not innovation Interesting experiment with unsurprising outcome. Current AI architecture is clearly missing something to be truly innovative. #

Can I remove my personal data from GenAI training datasets? It’s hard enough to find out whether my data has been used to train an AI model. It seems impossible to remove data from already trained models. May ‘machine unlearning’ offer a solution? #

Amazon exec says it’s time for RTO: ‘I don’t have data to back it up, but I know it’s better’ It’s a bit disappointing that so much of the remote work discussion focuses on ‘where to work’ rather than ‘how to work’. And even more strikingly a lot of the arguments against remote work seem to be based on inertia and personal beliefs. #

Reshaping the tree: rebuilding organizations for AI Organisations change very slowly. That’s why thinking about future AI models is more important than only reacting to today’s developments. By the time organisational processes are changed, new AI models are already out requiring further change. #

More than Half of Generative AI Adopters Use Unapproved Tools at Work Deviance seems to be an important topic for the future of work. AI is being adopted without formal policies. We are seeing the same with work-from-anywhere arrangements. #

"Make Real" There were a few demos of how ChatGPT can develop websites based on a sketch drawn on a tissue. “Make Real” is an amazing example of how anyone can now do web design and improve the product through iteration. #

The CEO of Dropbox has a 90/10 rule for remote work All-remote mixed with high intensity in-person events or workations seems to be a good recipe for remote organisations. #

ChatGPT use shows that the grant-application system is broken “A 2023 Nature survey of 1,600 researchers found that more than 25% use AI to help them write manuscripts…” If AI use among academics is really as widespread as this Nature survey suggests, shouldn’t we see a significant drop in journal submissions in light of publishers’ AI bans? #

OpenAI is too cheap to beat Even though LLMs seem to become a commodity, AI companies may still be able to develop a competitive advantage through their massive computing infrastructure. #

Dear journals: stop hoarding our papers Submitting papers to multiple journals at once. A simple idea that sounds so strange in the current publishing machine. Could it fix the machine? #

Navigating the Jagged Technological Frontier | Digital Data Design Institute at Harvard We have seen studies about performance gains in software development. This is one of the first studies in a different knowledge work profession. AI adoption is not a binary. Performance increase depends on the way AI is adopted. #

Return-to-office orders look like a way for elite, work-obsessed CEOs to grab power back from employees CEOs are trying to get control back. Return to the office mandates will affect workers unequally. #

Remote work is starting to hit office rents It took some time, but it seems that office rents are starting to reflect the shift to more remote work. Knowledge work heavy markets in San Francisco and Manhattan show the biggest declines. #

AI is most important tech advance in decades Generative AI seems to be widely believed to be a technology that will advance progress in a way that only few technologies do. Bill Gates considers it the most important technology since the graphical user interface, and even skeptics are starting to acknowledge its transformative potential. #

Was this written by a human or AI? Humans are terrible at identifying AI-generated content; only slightly better than a coin flip, and this is only going to get worse as AI models improve. The good news is that humans seem to be consistent in why they fail at detecting AI-generated content. So there may be cues that we can implement in AI models, so-called AI accents. #

Brainstorm Questions Not Ideas Our entire life we are trained to have answers. In an age of AI where answers are always just one prompt away, we may need to shift education towards teaching students how to ask good questions. #

The False Promise of ChatGPT Large language models have been prophesied to be a jumping board to general AI. Noam Chomsky argues that they are not. Large language models are incredibly strong at predicting language, but they are incapable of explaining language; a trait that is a necessary condition for intelligence. #