kernelle

joined 2 years ago

With the widely publicised open conflict between President Trump and some of the world’s most prestigious universities, academic freedom is no longer a niche topic for a small clique of self-proclaimed intellectuals. The front is quieter in Belgium, but it has not always been so, and the frontline has moved quite radically.

“American institutions of higher learning have in common the essential freedom to determine what is taught, how, and by whom. Our colleges and universities share a commitment to serve as centres of open inquiry where faculty, students and staff are free to exchange ideas and opinions without fear of censorship or deportation.”

So declared hundreds of American university presidents on April 22 in a public response to US President Donald Trump’s ferocious public attacks on Harvard University.

When reading the statement, I confess to experiencing a feeling of pride. Pride because so many universities united to defend academic freedom. Pride because Harvard, where I taught for several years, was taking the lead despite (and no doubt also because of) its being the main focus of attacks: in May, Trump ordered a ban on foreign students at Harvard. Pride also because European universities seem remarkably sheltered from such threats. Belgium, in particular, seems to be doing quite well.

Every year, an index of academic freedom is published by the University of Erlangen-Nürnberg. It aggregates several dimensions, such as institutional autonomy, the absence of interference in research and teaching, and the freedom to communicate ideas and findings. In the 2025 edition, Belgium is ranked fifth out of 179 countries, preceded only by Czechia, Estonia, Jamaica and Sweden. This flattering position may well be deserved. However, at least judging by the history of my own university, Belgium’s performance in terms of academic freedom cannot have been that brilliant until quite recently.

Academic freedom is not the same as freedom of expression, and it is legitimately subjected to tighter restrictions. Freedom of expression is the freedom to say and write whatever one wishes wherever one wishes. It is limited as regards both content and context, though the extent varies from one country to another. Typically, speech should not incite violence, defame particular individuals or advertise poisonous products. Blasphemous, racist, sexist, negationist and hate speech are also often prohibited by law. Moreover, the exercise of freedom of expression must respect the rules of public order. By transgressing such restrictions on the content or context of speech, one exposes oneself to penal sanctions. Whether one is an academic or not makes no difference.

Academic freedom, by contrast, is a privilege claimed by academics in the form of immunity from professional sanctions such as being fired, denied a promotion or reprimanded. It is the academics’ freedom to exercise their profession as they see fit, whether as teachers, researchers or public intellectuals. It covers what they say or write in their classes, their scientific publications and their public interventions — hence the connection with freedom of expression — but also what they do, for example how they evaluate their students or conduct their experiments.

While protected against a broader range of sanctions than freedom of expression, academic freedom is subjected to a stricter and more specific set of limitations that are best enforced by academic authorities, governments, or peer groups, depending on the case. For example, universities assign to their professors the task of teaching specific subjects to specific categories of students at specific times; governments require university experiments on human beings to be conditioned on the latter’s informed consent; and peer groups penalise sloppy research by preventing it from being subsidised or published.

The relevant question, therefore, is not whether academics should enjoy academic freedom, but rather how extensive this freedom should be. And the academics’ answer is: very. Extensive academic freedom is needed for the effective accomplishment of academia’s missions, and restrictions are only legitimate if they contribute to the effective and lasting accomplishment of these missions: the production, transmission and dissemination of knowledge.

1834: Université catholique versus Université libre

A fascinating book just published, Academische vrijheid. Een Leuvense geschiedenis (Academic Freedom: A Leuven History), documents how long it took for the University of Louvain/Leuven to achieve a protection of academic freedom that can plausibly claim to meet this criterion.

The old university, born in 1425, was brutally abolished by the French revolutionary troops in 1797. In 1834, shortly after the creation of the independent Kingdom of Belgium, the country’s Catholic bishops decided to found the Université catholique de Belgique. Briefly located in Mechelen, the seat of the archdiocese, it was renamed Université catholique de Louvain the following year, when the bishops took hold of the old university buildings in Leuven (where it operated in French only until the 1920s, in French and Dutch until the 1970s, and in Dutch only since then, as the French section moved to the new site of Louvain-la-Neuve).

Two weeks after the founding of the Université catholique de Belgique, Brussels’ freemasons founded the Université libre de Belgique, renamed Université libre de Bruxelles in 1842. Both universities were taking advantage of the “freedom of education” recognised by Belgium’s 1831 liberal constitution. Unlike the state universities of Ghent and Liège founded by the Dutch king two decades earlier, they did not rely on government resources, but in one case on the money collected one Sunday a year in every Belgian parish and, in the other, on the generosity of the country’s liberal bourgeoisie.

This financial autonomy helped secure one important dimension of academic freedom: freedom from interference by the government. In militant contrast to the Catholic university, the Université libre de Bruxelles was determined to go further. It adopted as its motto “Scientia vincere tenebras” (to conquer darkness through science), where “tenebras” was meant to refer to religious obscurantism. Its student anthem, still sung today on official occasions, is called “A bas la calotte” (“Down with the skullcap”), the “calotte” being the skullcap that priests wore. And the fundamental principle which its professors are still required to adhere to is “libre examen”, best translated as free inquiry.

What the newly created Université catholique de Louvain expected from its professors was quite different. Article 21 of its first statutes bluntly stated: “Academic education must be in harmony with the principles of the Catholic religion. Professors are obliged not only to refrain from teaching anything contrary to religion, but also to use the opportunities offered by the subject of their courses to teach students that religion is the basis of the sciences, and to instil in them a devotion to religion and to the duties it imposes.”

Bishops against freedom

At one time, around one-fifth of Louvain’s professors were Catholic priests. They were not allowed to publish without an “imprimatur” granted by their bishop. The first set of professors included a certain Gérard Casimir Ubaghs, a philosopher and priest who developed a philosophical theory inspired by the influential liberal-Catholic French priest Félicité Lamennais. The bishops of Liège and Bruges did not like that at all. They complained to the Congregation of the Index librorum prohibitorum in Rome, which forced Ubaghs to change his views. Rector De Ram came to Ubaghs’ defence. But the bishops would not budge. They barred the priests of their dioceses from the university and suspended the annual collection of money until Ubaghs was sacked in 1866, after the rector’s death.

Priests were more strictly scrutinised, but the academic freedom of other professors was not immune from interference by religious authorities. Just one example. The English biologist St. George Jackson Mivart, author of The Genesis of Species, discussed and criticised by Darwin, obtained a doctorate in medicine at the University of Louvain in 1884. In 1890, he was appointed to teach natural philosophy at the university’s Philosophy Institute, which had been founded the previous year by the future archbishop Désiré Mercier with the support of Pope Leo XIII. Mercier was the driving force behind Neo-Thomism, which saw science and religion as two distinct paths towards the truth, perfectly compatible with one another. He created chairs of cosmology and experimental psychology in his institute, and Mivart was in charge of teaching the theory of evolution. However, the bishops resented his presence at Louvain, and he was forced to resign in 1894, despite Mercier’s opposition.

The grip of the religious authorities on teaching and research gradually weakened over the following decades. After World War I, the research conducted at the university sought funds from sources beyond the church as it aspired to international recognition. The rectors therefore fought for more autonomy from the bishops, while the bishops sometimes fought for more autonomy from Rome. Thus, Mercier, by then appointed archbishop, made a half-successful trip to Rome around 1925 to try to prevent the Vatican’s Biblical Commission from censoring (once again) the writings on evolutionary theory by geology professor and priest Henry de Dorlodot.

Georges Lemaître was another Louvain professor and priest. An esteemed friend of Albert Einstein, he had a doctorate in mathematics from Louvain and a doctorate in physics from MIT. In 1931, he published his ground-breaking Big Bang theory, which conjectured (as was later empirically verified) that the universe is expanding. He took great care to emphasise that his conjecture, if correct, neither confirmed nor refuted the thesis that the universe was created by God. The Catholic authorities left him in peace. From then on, there seems to be no trace of religion-based interference with the work of Louvain’s natural scientists.

Abrupt dismissal

Philosophy is another matter. According to the first “règlement général” (general regulations) of the university, enacted in the 1830s, philosophy classes had a very specific role to play. They were compulsory for first-year students in all Faculties and were charged with teaching them “the fundamental truths of religion.”

I started my lecturing career at the Université catholique de Louvain in September 1980 by teaching precisely one of those compulsory philosophy classes, to first-year students in economic, social and political sciences. I had just returned to Belgium with my DPhil in philosophy from Oxford and was asked to replace a young priest who had announced that he was marrying one of his doctoral students (soon to become a member of one of Portugal’s first democratic governments) and had been dismissed on the spot. The disgraced priest had himself been acting as a substitute for our colleague André Léonard, Belgium’s future archbishop, who was busy directing the seminary he had just founded in Louvain-la-Neuve.

After three years, I was abruptly sacked from this teaching assignment without any warning by rector Monseigneur Edouard Massaux. When I asked him for the reason, all he said was, “It is possible that some people don’t like some of the ideas you present in the course.” To the indignant president of the philosophy institute, he was somewhat more specific: “There are too many Marxists in that Faculty.”

I have never been a Marxist, but I did devote one session, in the political philosophy part of the course, to a presentation of the Marxist critique of capitalism. I do not believe that, in 1980, the rector still expected first-year philosophy courses to teach, as in 1834, “the fundamental truths of religion”. But he probably expected me to warn my students, if I chose to include something on Marx, that anything coming from someone who wrote that religion was “the opium of the masses” should be looked at with the greatest suspicion.

Incidentally, it is quite amazing how Marx can still trigger such anguish over a century after his death, not least among people unlikely to have ever read one line by him. “We are going to choke off the money to schools that aid the Marxist assault on our American heritage and on Western civilization itself,” Donald Trump declared in a speech in Florida in 2023.

The defeated tenebras

I was deeply disappointed by my abrupt sacking, as I was convinced that both the content and style of my course were what my students needed, rather than the off-putting overview of the history of philosophy that my predecessors used to teach. But I had a comfortable tenured research position, and the teaching job was given to a friend who needed it more than I did. Hence, I did not make a fuss, although I perhaps should have, as a modest contribution to the elimination of the last remnants of blatantly illegitimate religion-inspired infringements of academic freedom.

Massaux was the last priest to serve as the university’s rector, and it is unlikely that there will ever be another one. One of the first acts of his successor, rector Pierre Macq, was to ask me to give that philosophy class again. A few years later he invited me to set up and direct the Hoover Chair of Economic and Social Ethics, whose role, admittedly, involved some “preaching”, but not the preaching of “the fundamental truths of religion”.

The history of the Université catholique de Louvain since 1834 can therefore be fairly described as a somewhat hobbling march towards full academic freedom from interference by religious authorities — a gradual adoption, some would say, of the Université libre de Bruxelles' principle of “libre examen”. In Louvain no less than in Brussels, scientia seems to have defeated the tenebras.

True, the archbishop of Belgium is still the Grand Chancellor of both the Dutch-speaking and French-speaking universities (officially split since 1970), but what this means has shrunk dramatically over the decades. In November 2011, André Léonard, by then appointed archbishop, accepted my invitation to a dialogue with my hundreds of bachelor students. He told them straight away: “Yes, I do chair the ‘pouvoir organisateur’ of your university, but this ‘organizing power’ has two characteristics: it has no power and it organizes nothing.” The recent replacement of “Katholieke Universiteit Leuven” and “Université catholique de Louvain” by “KU Leuven” and “UCLouvain” as the universities’ public names can be interpreted as a discreet consecration of this evolution.

Overshooting?

However, an incident that triggered some media attention a few years ago may make one wonder whether this emancipation from the tenebras is overshooting. In March 2017, a part-time lecturer was teaching a moral philosophy class at UCLouvain when his lecture was recorded without his knowledge by a student and posted on YouTube. In his lecture, he presented some arguments against the Belgian legislation that legalised abortion. The university authorities looked into the matter and decided to suspend him and not renew his teaching contract.

I was shocked. Surely, professors should be allowed to address controversial moral issues in moral philosophy classes, to make an empathic presentation of a view that deviates from what is currently considered politically correct, and to make clear that this view is also their own. What can be problematic, however, is a confusion between teaching and proselytising.

On controversial issues, it is important to make students aware of the diversity of positions and the strength of the arguments supporting each of them. This can be done through a balanced presentation by the teachers themselves, but also through assigning students the task of defending contradictory positions as best they can, or through inviting guest speakers chosen for their ability to intelligently defend positions that differ from those of the teacher. Even in barely controversial subjects, the aim must not be to pass on knowledge as dogma, but to explain on the basis of which observations and reasonings a consensus has been reached.

To judge whether, in this particular case, a teaching position was misused for proselytising purposes, one would need to know more about how the segment posted on the internet fitted into the rest of the course. However, academic authorities should resist the temptation to use sanctions selectively to punish those whose actual or apparent teaching deviates from their own convictions or from the quasi-consensus that happens to prevail. In the Catholic world, those who argued that abortion should be legalised used to face serious trouble. The university should not try to get forgiveness for its past sins by hypercorrectly chastising those who argue now that the legalisation of abortion has gone too far.

Threatened by peer pressure?

It is no bad thing for a university to have non-conformist, even eccentric, personalities among its teaching staff. There are no doubt excesses to be avoided. To keep these in check, however, one must not rely on denunciations of the kind encouraged today by MAGA fans in American universities or on disciplinary procedures that restrict academic freedom from the top. One must rather rely, as far as possible, on an anti-dogmatic yet responsible ethos that is sufficiently shared within the university community and on the informal sanctions by peers that go with it.

Academic freedom, however, might also be undesirably restricted by the peer group. In a book entitled Is links gewoon slimmer? Ideologie aan onze universiteiten (Is the left simply smarter? Ideology in our universities, Leuven, 2023), Andreas De Block, professor of philosophy of science at KU Leuven, reflects on the causes and consequences of the — apparently well established — overrepresentation of (loosely defined) left-wing electoral preferences and points of view among university professors.

Combined with selection and self-selection, peer pressure, in such a context, reduces the diversity of the views to which students are exposed, of the questions that researchers investigate and of the positions publicly expressed by academics. This tends to breed, intentionally or not, consciously or not, the so-called cancel culture that affects all universities, whether committed to “libre examen” or not.

This lack of diversity, De Block further argues, is detrimental to the universities’ core missions of producing, transmitting and disseminating knowledge. It is also instrumental in undermining the trust enjoyed by academic “experts” among a broad section of the general public. Consequently, it provides some governments with convenient pretexts to curtail their financial support to universities and attack their academic freedom to an extent not seen for decades in the “free” world.

A staunch resistance to these attacks is imperative. But it is compatible with an honest permanent reflection on the contours of academics’ indispensable academic-freedom-restricting professional ethos. Do the contours of the peer pressure that enforces this ethos match what is needed for the optimal accomplishment of the universities’ three core missions? Or does this peer pressure sometimes include an undesirable restriction of diversity that badly hinders the pursuit of these missions?

The legitimate privilege of academic freedom can be threatened from within as well as from above. Attacks from above are generally more brutal, more damaging and more visible. But vigilance is in order on all fronts.

brusselstimes.com - Philippe Van Parijs

[–] kernelle@0d.gs 3 points 4 days ago

If we look at historic crashes, they had major catalysts causing mass sell orders. Right now, markets have had time to adjust because the decline has been very slow.

Markets are also largely speculative; many stocks trade way above their fundamental value (think Microsoft, Tesla, or Coca-Cola). These will probably be hit the hardest: algorithms will revert to what a stock should be worth, and prices will drop hard. But these companies might have the strongest chance to bounce back as well.

Companies with the strongest books will be safer, but many more risk-taking companies won't be as lucky. This is part of what due diligence on a stock will tell you, but it's also probably one of the hardest parts of investing.

As long as the decline is slow, stability can be found. But when uncertainty rises fast, so does the instability of the stock market. Catalysts such as the public losing confidence in banks and causing a bank run, companies downsizing at unseen scales to cut costs, or global political instability are all possible.

TLDR: it needs to get way worse, very quickly for the market to crash

[–] kernelle@0d.gs 10 points 1 week ago

My first-year professor in electronics started his first lecture with "yeah, so forget everything you've learned about electricity, because it's wrong" - then handed out an infinite matrix of resistors and made us cry.

[–] kernelle@0d.gs 2 points 1 week ago

Different cultures! Dietary restrictions aren't optional though

[–] kernelle@0d.gs 3 points 1 week ago (2 children)

I might fundamentally disagree with you about what a restaurant is. For me it's a place where hard-working people get to share their cuisine with you. The most I'll ask at a restaurant is one alteration to one dish.

When I read the OP and your post, a restaurant seems like the place for you to get the perfect meal.

As Beau Miles puts it: "I plan on regretting what I'm eating at least once this week"

[–] kernelle@0d.gs 12 points 1 week ago

That episode was probably one of the best in the series! If someone's wondering: Game Changer by Dropout, S7E1

PS: I've been here the whole time!

[–] kernelle@0d.gs 14 points 1 week ago (1 children)

I was perfectly okay not knowing about this

[–] kernelle@0d.gs 4 points 2 weeks ago

Nice to hear! I'm glad you enjoyed it.

 


2
submitted 2 weeks ago* (last edited 2 weeks ago) by kernelle@0d.gs to c/self@0d.gs
 

TLDR: Testing several CDNs revealed Vercel and GitHub Pages as the fastest and most reliable for static solutions, and Cloudflare for a self-hosted origin.

The Problem

In my previous post, I achieved loadtimes in Europe of under 100ms. But getting those speeds worldwide is a geographical hurdle. My preferred hosting location has always been London because of its proximity to the intercontinental submarine fiber optic network providing some of the best connectivity worldwide.

Heatmap of latency

Azimuthal Projection: Measuring the latency of 574k servers around the world from my Lemmy server in London

But it's a single server, in a single location. From the heatmap we can see the significant fall-off in response times past the 2500km line, such that a visitor from Tokyo has to wait around 8 times longer than their European counterparts.

Free Web

The answer is obvious: a Content Delivery Network, or CDN, distributes a website in a decentralised fashion, always delivering from the server closest to the user and drastically reducing loadtimes.

I could be renting servers on every continent and make my own CDN, with blackjack and .. never mind! Instead, I'll be using existing and free infrastructure. While the internet is forever changing to accommodate stock markets, many companies, organisations, and individuals still offer professional-grade services for free. We cannot overstate the value of having such accessible, professional tools ready for experimenting, exploring, and learning with no barrier to entry.

These constraints mean the tests don't answer how good these CDNs are in absolute terms, but rather how many resources they allocate to their free tiers.

Pull vs Push CDN

Pull CDN

A Pull CDN has to request the data from an origin server, whereas a Push CDN pre-emptively distributes code from a repository worldwide.

For our static one-pager a Push CDN is a dream: it deploys your repository, distributes it worldwide in an instant, and keeps it there. Pull CDNs also store a cached version, but they need our origin server whenever that cache is missing or stale. The first visitor in a particular area might wait significantly longer if a closer server has not yet cached from the origin. This doesn't mean Push CDNs can't be used for complex websites with extensive back-ends, but doing so adds complexity, optimization work, and cost to the project.

Push CDN

Map: Measuring round trip time to my site when using a push CDN

Edge vs Regional vs Origin

CDNs cache in different ways, at different times, using different methods. But all of them describe Edge Nodes - literally the edge of the network, for the fastest delivery - and Regional Nodes, which come into play when Edge Nodes or "Points of Presence" need to be updated.

Origin Nodes are usually only used with Pull CDNs, so the network knows what content to serve when no cache is available: an Edge Node asks the Regional Node what the Origin has to serve. Unfortunately, that means a CDN without a minimum amount of resources will be slower than not using one at all.

Where cache is stored with Push CDNs also depends on the provider, but they often use a repository that automatically updates the entire network with a new version. That means they can cache much more aggressively and efficiently across the network, resulting in faster loadtimes.

Testing

I know, I know, you're here for numbers, so let me start with some: 3 rounds of testing, 6 continents, 98 tests each, for a combined total of 588 requests spread over 34 days. Every test cycle consists of one https request per continent, using the GlobalPing network and a simple script I've written to interface with their CLI tool.

Varying the time between requests will provide us with insight into how many resources are allocated for us regular users. We're looking for the CDN that's not only the fastest but also the most consistent in its loadtimes.

Included is a demonstration of a badly configured CDN - actually just two CDNs chained together. Without the two platforms talking to each other, the network gets confused and loadtimes more than double.

Finally, I've included ioRiver - the only platform I've found with Geo-Routing on a free trial. This would allow me to serve Europe with my own server and the rest of the world with an actual CDN. For the first 2 testing rounds, I configured ioRiver to serve only through Vercel, as a baseline test of how much delay, if any, they add. In the 3rd round, ioRiver routed Europe to my server and the rest of the world to Vercel.

Results

We should expect my self-hosted solution to deliver the same speeds every round; this is our baseline. Any CDN with a slower average than my single self-hosted server is not worth considering.

  • Round 1: 3 days of requesting once every hour (72 tests)
  • Round 2: 7 days of requesting once every 12 hours (14 tests)
  • Round 3: 24 days of requesting once every 48 hours (12 tests)

Round 1 - data

Graph Round 1

Frequent requests to a CDN ensure the website is cached not only in faster memory tiers but also across a more diverse spread of edge servers.

  • Pull CDNs show their strong advantage by not needing an extra request to my server
  • ioRiver (Multi-CDN) is set up to serve only Vercel; we can see it adds a considerable delay
  • Vercel, GitHub Pages, and Cloudflare (Pull) show themselves to be early leaders

Round 2 - data

Graph Round 2

Most reflective of a regular day on the site (all charts are ordered by this round)

  • Some CDNs already reflect slightly slower times due to not being cached as frequently
  • GitHub stands out to me in this round of testing, being a little more stable than in the previous round

Round 3 - data

Graph Round 3

†I didn't take Static.app's 30 day trial into account when testing, which is why it's absent from this final round.

  • Surprisingly enough, for Cloudflare we notice their Pull version pulling ahead of their Push CDN
  • Adding my Self-Hosted solution to ioRiver's Multi-CDN via Geo-Routing to Europe shows it can genuinely add stability and decrease loadtimes

Notes

It's pretty clear by now Vercel is throwing more money at the problem, so it shouldn't come as a surprise they've set a limit on the monthly requests: a respectable 1 million or 100GB total per month. For evaluation, I've changed my website to start hosting from Vercel.

GitHub's limits are described as a soft bandwidth limit of 100GB/month, more like a gentleman's agreement.

Same as last time, I'll be leaving up the different deployments for another month probably.

Code / Scripts on GitHub

These scripts are provided as-is. I've made them partly configurable via the CLI, but there are also hard-coded changes required if you're planning on using them yourself.

CDN Test

ping.py

Interfaces with GlobalPing's CLI tool; it completes an https request for every subdomain or deployment, from every continent equally, with plenty of rate limiting built in. In hindsight, interfacing with their API might've been a better use of time...
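A minimal sketch of the same idea in Python, assuming the GlobalPing CLI's `globalping http <target> from <location>` syntax and its `--limit` flag; the real ping.py adds hardcoded deployments, file output, and stricter rate limiting:

```python
import subprocess
import time

CONTINENTS = ["Europe", "North America", "South America",
              "Asia", "Africa", "Oceania"]
TARGET = "example.com"  # hypothetical: one deployment's (sub)domain

def test_cycle(target: str) -> None:
    """One test cycle: a single https request from every continent."""
    for continent in CONTINENTS:
        result = subprocess.run(
            ["globalping", "http", target, "from", continent, "--limit", "1"],
            capture_output=True, text=True)
        print(f"--- {continent} ---\n{result.stdout}")
        time.sleep(5)  # crude rate limiting between probes

if __name__ == "__main__":
    test_cycle(TARGET)
```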

parseGlobalPing.py

Parses all files generated by GlobalPing during ping.py, calculates averages, and returns this data pretty-printed or as CSV (I'm partial to a good spreadsheet...). Easy to tweak with CLI arguments.
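Conceptually it boils down to something like this hypothetical sketch; the `time=<ms> ms` line format and the file layout are assumptions about the saved CLI output, and the real script also groups results per continent:

```python
import glob
import re
from statistics import mean

# One saved output file per deployment/cycle (path layout is assumed)
for path in sorted(glob.glob("results/official_12h/*.txt")):
    with open(path) as f:
        times = [float(t) for t in re.findall(r"time=([\d.]+) ms", f.read())]
    if times:
        print(f"{path}: {mean(times):.1f} ms average over {len(times)} requests")
```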

CDN Testing Round

Ping every 12h from every continent (hardcoded domains & time)
$ python3 ping.py -f official_12h -l 100 
Parse, calculate, and pretty print all pings
$ python3 parseGlobalPings.py -f official_12h

Heatmap

masscan - discovery

masscan 0.0.0.0/4 -p80 --output-format=json --output-filename=Replies.json --rate 10000

Scans a portion of the internet for servers with an open port 80, traditionally used for serving a website or redirect.

hpingIps.sh - measurement

Due to masscan not recording RTTs, I used hping for the measurements. Nmap is a good choice as well, but hping is slightly faster. I found MassMap after my scan, which wraps Masscan and Nmap together nicely. I'll update this when I've compared its speed against my implementation.

This is a quick and dirty script to use hping and send one packet to port 80 of any host discovered by masscan.
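The actual script is shell; here is a sketch of the same idea in Python, assuming hping3's `-S -p 80 -c 1` flags and the `rtt=` field in its output (hping3 needs root):

```python
import re
import subprocess

# Send one TCP SYN to port 80 of every host masscan discovered
with open("IPList.txt") as hosts, open("rtt_log.txt", "w") as log:
    for ip in (line.strip() for line in hosts if line.strip()):
        result = subprocess.run(
            ["hping3", "-S", "-p", "80", "-c", "1", ip],
            capture_output=True, text=True)
        match = re.search(r"rtt=([\d.]+)", result.stdout)
        if match:  # unanswered probes are simply skipped
            log.write(f"{ip} {match.group(1)}\n")
```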

query.py - parse and locate

Its primary and original function is to query the GeoLite2 database with an IP address for a rough estimate of the host's physical location, used to plot the heatmap. Now it can also estimate the distance between my server and another host using the Haversine formula.
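For reference, the Haversine great-circle distance looks like this in Python (the server coordinates in the example are my approximation for London):

```python
from math import radians, sin, cos, asin, sqrt

def haversine(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))  # 6371 km = mean Earth radius

print(haversine(51.51, -0.13, 35.68, 139.69))  # London -> Tokyo, ~9560 km
```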

plot.py

Creates the heatmap from the output of query.py (longitude, latitude, and RTT) using Matplotlib.
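A minimal sketch of that final plotting step; the column layout of the combined log is an assumption, and the real plot.py also handles the azimuthal projection:

```python
import matplotlib.pyplot as plt
import numpy as np

# Assumed columns: longitude latitude rtt_ms
lon, lat, rtt = np.loadtxt("log_combined.txt", unpack=True)

plt.figure(figsize=(12, 6))
sc = plt.scatter(lon, lat, c=rtt, s=2, cmap="viridis",
                 vmax=np.percentile(rtt, 95))  # clip outliers for contrast
plt.colorbar(sc, label="RTT (ms)")
plt.xlabel("Longitude")
plt.ylabel("Latitude")
plt.savefig("heatmap.png", dpi=200)
```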

query.py and plot.py are forked from Ping The World! by Erik Bernhardsson, which is over 10 years old. The new scripts fix many issues and are much improved.

Graph plot

Masscan (Command mentioned above)
# Replies.json

Masscan -> IPList
$ python3 query.py --masscan > IPList.txt

IPList -> RTT
# sh hpingIps.sh

RTT -> Combinedlog
$ python3 query.py > log_combined.txt

CombinedLog -> Plot
$ python3 plot.py

./Martijn.sh > Blog / How I made a blog using Lemmy / Measuring the latency of 574k servers around the world from my lemmy server

[–] kernelle@0d.gs 6 points 2 weeks ago

They each fuck with my window arrangement on virtual desktops when rebooting in their own special way. I've switched to Wayland but x11 did feel more polished.

[–] kernelle@0d.gs 1 points 3 weeks ago

Thanks for sharing!

[–] kernelle@0d.gs 17 points 3 weeks ago (1 children)

I like your funny words, magic man

0
submitted 1 month ago* (last edited 1 month ago) by kernelle@0d.gs to c/self@0d.gs
 

Promoted by the indieweb and keeping old internet traditions alive, webring websites link to each other in a circular fashion. They are often run by likeminded individuals with common interests or themes; in my case, the Fediring. Following the links or arrows, you'll find a large community of amazing and interesting people on the Fediverse.

Other interesting webrings:

 

Recently launched, the SAT GUS satellite can display any picture and photograph it with the Earth as a backdrop. 'I Spent $5,000,000 So You Can Go To Space For FREE', as Mark Rober puts it. Clickbait debate aside, he provides something insane for free to anyone.

This detail was mentioned on n2yo.com, linked on their website space.crunchlabs.com

Full description

Take selfies with Earth in the background.

SAT GUS will enable users from around the world to upload their photos via spaceselfie.com and specify their city. Using Redwire’s flight-proven camera technology, SAT GUS will capture HDR pictures of user-submitted selfies that will be displayed on a Google Pixel phone onboard SAT GUS, with Earth as the backdrop. The photos will then be transmitted to Earth.

The SAT GUS mission is part of a unique science, technology, engineering, art, and mathematics (STEAM) activation that raises awareness of the impact space has on our daily lives and will support underserved engineering students around the world.

Thanks to everyone who helped with this incredible build:

  • Our Build team at Tyvak International for helping us bring SAT GUS to life!
  • SCHOTT for providing radiation resistant glass.
  • REDWIRE for providing the Space hardened camera.
  • Muon Space for thermal vacuum testing.
  • The Vibrational Testing Laboratory of Centrotecnica Srl.

The SAT GUS satellite, designed and built by Tyvak International of Milan Italy, aims to allow people to snap selfies in space, with Earth as the backdrop.

The “Space Selfie project”, launched by CrunchLabs, an initiative founded by Mark Rober, a former NASA engineer and YouTube content creator, aims to send participants’ selfies to space, display them on a satellite-mounted phone, and capture a photo with Earth in the background before sending it back to the participant.

 

Trouble in Bruges

When tourists say they wish they could take a piece of their favorite place home with them, sometimes they mean it a bit too literally.

Belgium’s picturesque city of Bruges has issued a request that tourists stop stealing cobblestones from its UNESCO-recognized medieval streets.

Local politician Franky Demon says an estimated 50 to 70 cobblestones disappear per month — even more during peak season — and it costs 200 euros (about $225) per square meter to replace them and fix the damage.

cnn.com

 

From my server in London, I measured the RTT, or round trip time, to 574,691 random webservers and plotted the times on the globe.

Discovery was done with masscan, measurements using hping and plotting with an old Python script I've revived and enhanced.

~~This is part of the next writeup on my blog, with which I will be posting any of the code I've used.~~

Full write-up and code

Blog / How I made a blog using Lemmy

 

The academic halls of Harvard Kennedy School in Massachusetts welcomed a new face in 2024 - not just another high-achieving student but a future monarch.

Princess Elisabeth, who turned 23 on October 25, 2024, began a two-year master's programme in public policy last September. While she may not yet be a familiar name in the United States, she's a royal figure destined to shape history.

ndtv.com

 

Belgium could potentially have additional F-35 fighter jets manufactured in Italy instead of the United States, according to Defence Minister Theo Francken.

Belgium has already ordered 34 F-35 jets from US manufacturer Lockheed Martin, with production currently based in Texas. However, Francken plans for any additional jets to be produced at Lockheed Martin’s facility in Italy.

brusselstimes.com

1
submitted 2 months ago* (last edited 1 month ago) by kernelle@0d.gs to c/self@0d.gs
 

This is a followup to my introduction of BlogOnLemmy, a simple blog frontend. If you haven't seen it, no need; I will be explaining how it works and how you can run your own BlogOnLemmy for free.

Leveraging the Federation

Having a platform to connect your content to likeminded people is invaluable. The Fediverse achieves this in a platform-agnostic way, so in theory it shouldn't matter which platform we use. But platforms have different userbases that interact with posts in different ways. I've always preferred the forum variety, where communities form and discussion is encouraged.

My posts are shared as original content on Lemmy, and that's who they're meant for. I chose a traditional blog style to make a platform that is more palatable to a wider audience, in this way also promoting Lemmy.

Constraints

Starting off, I did not want the upkeep of another federated instance. Not every new thing deployed on the Fediverse needs to stand on its own or be made from the ground up as an ActivityPub-compatible service; it can instead use existing infrastructure, already federated, already primed for interconnectivity. Taking it one step further means not having a back-end at all: a 'dumb' website, as it were. Posts are made, edited, and cross-posted on Lemmy.

The world of CSS and JavaScript on the other hand - how websites are styled and made feature-rich - is littered with libraries. Treated like black boxes, often just a few of their functions are used, with the rest clogging up our internet experience. Even jQuery, which is used by over 74% of all websites, is already 23kB in its smallest form. I'm not planning on having the smallest possible footprint†, but rather on showing that a modern web browser provides an underused toolset of natively supported functionality; something the first webdevs would have given their left kidney for.

Lastly, to improve maintainability and simplicity, one page is enough for a blog, provided that its content can be altered dynamically.

See optimization

How it's made

Graphviz

1) URL: Category/post

Even before the browser completely loads the page, we can take a look at the URL. With our constraints, only two types of additions are available to us: the anchor and GET parameters. When an anchor, or '#', is present, websites scroll to a specific place on the page after loading. We can hijack this behavior and use it to load predefined categories, like '#blog' or '#linkdumps'. For posts, '#/post/3139396' looks nicer than '?post=3139396', but anchors are rarely search-engine compatible, so I'm extracting the GET parameter to load an individual post.

JavaScript that runs before the page has finished loading should be swift and easy, like coloring the filters or setting Dark/Light mode, so it doesn't delay the site.

2) API -> Lemmy

A simple 'Fetch' is all that's required. Lemmy's API is already extensive, because it's used by the different frontends and apps that make an individual's experience unique. When selecting a category, we request all the posts made by me in one or more Lemmy communities. A post or permalink uses the same post_id as on the Lemmy instance. Pretty straightforward.
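To make the idea concrete outside the browser, here is a sketch of the two calls in Python. The endpoint paths and parameter names follow Lemmy's v3 HTTP API as I understand it and may differ between Lemmy versions:

```python
import requests

INSTANCE = "https://0d.gs"  # the Lemmy instance the blog reads from

# A category: recent posts from one community
posts = requests.get(f"{INSTANCE}/api/v3/post/list",
                     params={"community_name": "self",
                             "sort": "New", "limit": 10}).json()
print(f"{len(posts['posts'])} posts in the category")

# A permalink: one post, by the same post_id used in the blog's URL
post = requests.get(f"{INSTANCE}/api/v3/post",
                    params={"id": 3139396}).json()
print(post["post_view"]["post"]["name"])
```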

3) Markdown -> HTML

When we get a reply from the Lemmy instance, the posts are formatted in Markdown, just as they are when you submit them. But our browsers render HTML, a different markup language. This is where the only code not written by me steps in: a Markdown-to-HTML layer called snarkdown. It's very efficient and probably has the smallest footprint possible for what it is, around 1kB.

Optimization

When my blog launched, I was using a Cloudflare proxy for no-hassle https handling, caching, and CDN. Within the EU, I'm aiming for sub-100ms† to be faster than the blink of an eye. With a free tier of Cloudflare we can expect a variance between 150 and 600ms at best, and intercontinental caching can take seconds.

Nginx and OpenLiteSpeed are regarded as the fastest webservers out there. I often use Apache for testing, but for deployment I prefer Nginx's speed and reliability. I could sidetrack here and write another 1000 words about the optimization of static content and TLS handling in Nginx, but that's a story for another time.

†For the website, API calls are made asynchronously while the page loads and are not counted

Mythical 14kB, or less?

All data transferred on the internet is split up into manageable chunks or frames. Their size, or Maximum Transmission Unit, is defined by IEEE 802.3-2022 1.4.207 with a maximum of 1518 bytes†. They usually carry 1460 bytes of actual application data, the Maximum Segment Size.

RFC 6928, followed by most server operating systems, proposes 10x MSS as the initial Congestion Window for the first reply. In other words, the server 'tests' your network by sending 10 frames at once. If your device acknowledges each frame, the server knows to double the Congestion Window on every subsequent reply until some frames are dropped. This is called TCP Slow Start, defined in RFC 5681.

10 frames of 1460 bytes contain 14.6kB of usable data. Or at least, they used to. The modern web changed with the use of encryption. The Initial Congestion Window, in my use case, includes 2 TLS frames, and every frame loses an extra 29 bytes, reducing our window to 11.4kB. If we manage to fit the website within this first Slow Start routine, we avoid an extra round trip in the TCP/IP protocol, speeding up the website by as much as your latency to the server. Min-maxing TCP traffic is the name of the game.

†Can vary with the MTU settings of your network or interface, but around 1500 (+14 bytes for headers) is the widely accepted default
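The arithmetic from the paragraph above, spelled out (the numbers are the ones used in the text; real-world TLS overhead varies with cipher and record sizes):

```python
MSS = 1460       # payload bytes per frame
INIT_CWND = 10   # frames in the first reply (RFC 6928)

plain = INIT_CWND * MSS             # 14600 bytes -> the "mythical" 14.6kB
tls = (INIT_CWND - 2) * (MSS - 29)  # 2 frames spent on TLS, 29 bytes lost per frame
print(plain, tls)                   # 14600 11448 -> roughly the 11.4kB budget
```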

10kB vs 15kB with TCP Slow Start

Visualizes two raw web requests, 10.7kB vs 13.3kB with TCP Slow Start

  • Above Blue: Request Starts
  • Between Green: TLS Handshake
  • Inside Red: Initial Congestion Window

Icons

Icons are tricky, because describing pixel positions takes up a considerable amount of data. Instead, SVGs are commonplace, creating complex shapes programmatically and significantly reducing the footprint. Feathericons is a FOSS icon library providing a beautiful SVG-rendered solution for my navbar. The favicon, or website icon, I coded manually with the same font as the blog itself. But after different browsers took liberties rendering the font and spacing, I converted it to a path-traced design, describing each shape individually and making sure it renders consistently everywhere.

Regular vs. Inline vs Minified

If we sum up the filesizes, we're looking at around 50kB of data. Luckily servers compress† our code, and are pretty good at it, leaving only 15kB to be transferred; just above our 11kB threshold. By making the code unreadable for humans with minifying scripts, we can reduce the final size even more. Only... the files that make up this blog are split up. Common guidelines recommend doing so to prevent one big file from clogging up load times. For us that means splitting up our precious 11kB into multiple round trips, the opposite of our goal. Inline code blocks to the rescue, with the added bonus that the entire site is now compressed into one file, making the compression more efficient and ending the optimization at a neat 10.7kB.

†The Web uses Gzip. A more performant choice today is Brotli, which I compiled for use on my server
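As an illustration, the inlining step could be as simple as this hypothetical build script (filenames and tag strings are assumptions; minification would run before it):

```python
from pathlib import Path

html = Path("index.html").read_text()
css = Path("style.css").read_text()
js = Path("main.js").read_text()

# Swap the external references for inline blocks
html = html.replace('<link rel="stylesheet" href="style.css">',
                    f"<style>{css}</style>")
html = html.replace('<script src="main.js"></script>',
                    f"<script>{js}</script>")

Path("dist").mkdir(exist_ok=True)
Path("dist/index.html").write_text(html)
print(f"{len(html.encode()) / 1024:.1f} kB before compression")
```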

In Practice

All good in theory; now let's see the effect in practice. I've deployed the blog 4 times, and each version was measured for total download time across 20 requests (a minimal version of this measurement is sketched after the scenario list). In the first graph we notice the impact of not staying inside the Initial Congestion Window: only the second scenario is delayed by a second round trip when loading the first page.

Scenarios 1 and 3 have separate files, so separate requests are made. This gives priority to displaying the website - the first file - but neglects potentially usable space inside the init_cwnd. Comparing the second graph, we can tell it ends up almost doubling their respective total load times.

The final version is the only one transferring all the data in one round trip, and it is the one deployed on the main site. Total download times go as low as 51ms, with around 150ms as a soft upper limit and an 85ms average in Europe. Unfortunately, worldwide tests show load times of 700ms, so I'll eventually implement a CDN.

Speedtest 4 scenarios

  1. Regular (14.46kB): no minification, separate files
  2. Inline (13.29kB): no minification, one file
  3. Regular Minified (10.98kB): but still using separate files
  4. Inline Minified (10.69kB): one page as small as possible
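A sketch of how such a measurement can be reproduced with plain Python; the four URLs are placeholders for the dev deployments:

```python
import time
import requests

SCENARIOS = {
    "regular": "https://dev1.example.com",  # hypothetical dev URLs
    "inline": "https://dev2.example.com",
    "regular-minified": "https://dev3.example.com",
    "inline-minified": "https://dev4.example.com",
}

for name, url in SCENARIOS.items():
    samples = []
    for _ in range(20):
        start = time.perf_counter()
        requests.get(url)  # fresh connection each call: includes TCP + TLS setup
        samples.append((time.perf_counter() - start) * 1000)
    print(f"{name}: {sum(samples) / len(samples):.0f} ms average")
```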

~~I'll be leaving up dev versions until there's a significant update to the site~~

Content Delivery Network

Speeds like this can only be achieved when you're close to my server, which is in London. For my Eurobros that means blazing-fast response times. For anyone else, cdn.martijn.sh points to Cloudflare's CDN and git.martijn.sh to GitHub's CDN. These services allow us to distribute the blog to servers across the globe, so requesting clients always reach the closest server available.

GitHub Pages

An easy and free way of serving a static webpage. Fork the BlogOnLemmy repository and name it '<GitHub-Username>.github.io'. Your website is now available as username.github.io, and it even supports the use of custom domain names. Mine is served at git.martijn.sh.

While testing its load times worldwide, I got response times as low as 64ms, with 250ms on the high end. Not surprisingly, they deliver the page slightly faster globally than Cloudflare does, because they're optimizing for static content.

Extra features

  • Taking over the Light or Dark mode of the user's device is a courtesy more than anything else. On top of that comes a selectable permanent setting: my way of highlighting the overuse of cookies and localStorage, by giving the user the choice to store data for a website built from the ground up not to use any.
  • A memorable, interactable canvas to give a personal touch to the about-me section.
  • Collapsed articles with a 'Read More' button.
  • A 'Load More' button that loads the next 10 posts, so the page is as long as you want it to be.

Webmentions

Essential for blogging in the current year, Webmentions keep websites up to date when links to them are created or edited. Fortunately, Lemmy has us covered: when a post is made, the originating instance sends a Webmention to the host of every link mentioned in the post.

To stay within scope, I'll be using webmention.io for now, which lets us get notified when we're linked somewhere else by adding just a single line of HTML to our code.

Notes

  • Enabling HTTP/2 or HTTP/3 did not speed up load times; in fact, with protocol negotiation and TLS they added one more packet to the Initial Congestion Window.
  • For now, the apex domain will be pointing directly to my server, but more testing is required in choosing a CDN.
  • Editing this site for personal use requires knowledge of HTML and JS for now, but I might create a script to individualize blogs easier.

GitHub | ./Martijn.sh > Blog

 

Almost 10 years ago, at 17, Joeri found himself in a horrifying blind-spot accident. After being dragged 600m by a truck, he lost an arm, an eye, and most of his other hand. Pulling through as a symbol of strength and perseverance, he's managed to have a positive influence on road safety in Belgium. With hiphop influences and a good support group, he decided to release a single to 'Begin' his musical career.

Today marks the release of his music video, a passion project with amazing production quality. As a friend, I've decided to share his story and debut with an international audience. Don't worry: as it's rapped in a West Flemish dialect, even the overwhelming majority of my country doesn't understand a word. Either way, I've translated the lyrics if you're curious.

Joeri Verbeeck - Begin [Translated lyrics]


(Actually they don't know how I... [feel])

I don't know how to Begin

Just words in sentences

But the shit I've been through

You couldn't even make it up

(You couldn't make it up)

I lost a lot by the road, but I'm here to Win

(Let's Go!)

Why are you looking? Am I in a film?

(In a film?)

I'm going to work, while you all just chill?

(Just chill?)

I do more than you, with only one arm!

I'm richer in perseverance, I'm definitely not poor

But still they're staring

Am I wearing your clothes? I only have one eye!

They tell me I gotta chill, I gotta keep calm

Take your time, but they didn't tell me time flies!

(that time flies)

You can't play with me like a nintendo!

No longer stuck in that blind spot

Next page, like a new book

Nothing happens if I do nothing

That's why when I yell, I yell good

I jump from left to right like a kangaroo

Klets in the pets, in the bottle [=fles], in de drets [putting anything before -ets is a yell, meaningless on its own]

And I flex on the mass and I'm not depressed

And I stress for the test in the first lesson

I thought it was my last day

But the world is still not rid of me, yeah!

I was 17, I hadn't seen anything

Almost my last ride, it wasn't that clean

Why did I have the see the street from that close?

(Why did I have the see the street from that close)

Nobody knows how I feel

It's like love isn't meant for me

Every day overwhelmed by it all

I'm a warm person, but let it cool

(Actually they don't know how I... [feel])

 

Questions need to be recorded in video form; either in Dutch, French, or German.
