nonserf

joined 1 month ago
[–] nonserf@libretechni.ca 1 points 2 weeks ago

Those are probably things I should look into. Since those free-to-air networks are TV networks, MythTV would likely work for them. But I have no idea whether the absence of video would cause any issues, since a satellite tuner device for a PC might only be designed to receive TV signals.

[–] nonserf@libretechni.ca 2 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

I have no Internet. I want to hear the local broadcasts when I am at home.

At home, I have ~75–100 local broadcast stations which cover local news and events. I also figure that of the thousands of Internet stations, very few would likely be specific to my region. I think only a small fraction of broadcast radio stations have an Internet stream.

(edit)

When I am in a cafe or library getting Internet, I use that opportunity to listen to distant stations.

Note as well that a strong DAB signal is more reliable than any Internet stream. There are many more points of failure with Internet delivery, such as network congestion.

You do give me an idea though. I have some shell accounts. I could perhaps set up a timed recording of something I want to hear from Internet radio, then fetch it whenever I get online. But I guess a MythRadio would still be useful... something to show me the schedules centrally. I think at the moment we are stuck with going to the website of each station and navigating their UI one station at a time. Fuck that.
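
A minimal sketch of that timed-recording idea, assuming Python with the requests library is available on the shell account and that cron does the scheduling; the stream URL and the cron slot are placeholders, not a real station:

    # record_stream.py -- dump an Internet radio stream to a file for a fixed duration.
    # A cron entry would launch it at the programme's start time, e.g. a hypothetical
    # Friday 20:00 slot:   0 20 * * 5  python3 record_stream.py
    import sys
    import time
    import requests

    STREAM_URL = "https://example.com/station.mp3"   # placeholder stream URL
    DURATION_S = 60 * 60                              # record one hour
    OUT_FILE = time.strftime("recording-%Y%m%d-%H%M.mp3")

    deadline = time.time() + DURATION_S
    with requests.get(STREAM_URL, stream=True, timeout=30) as resp:
        resp.raise_for_status()
        with open(OUT_FILE, "wb") as out:
            for chunk in resp.iter_content(chunk_size=64 * 1024):
                out.write(chunk)
                if time.time() > deadline:
                    break
    print("saved", OUT_FILE, file=sys.stderr)

The recording would then sit on the shell account until I am next online and can pull it down with scp or rsync.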

 

cross-posted from: https://libretechni.ca/post/321504

MythTV is a great tool for browsing broadcast TV schedules and scheduling recordings. It’s a shame so many people have been suckered into cloud streaming services, which charge a subscription and still collect data on you. Broadcast TV lately has almost no commercial interruptions and of course no tracking. It’s gratis as well. And if they bring in commercials, MythTV can auto-detect them and remove them.

FM and DAB radio signals include EPG data, so the scheduling metadata is out there. But apparently no consumer receivers make use of it; they just show album art.

There are no jazz stations where I live. Only a few stations which sometimes play jazz. It’s a shame the EPG is not being exploited. Broadcast radio would be so much better if we could browse a MythTV schedule and select programs to record.

I suppose it’s not just a software problem. There are FM tuner USB sticks (not great). Nothing for DAB. And nothing comparable to the SiliconDust designs, which are tuners that connect to ethernet.

 

I have ongoing business with banks, telecoms, energy suppliers, cloud services, etc.

They all have dynamic terms of service (ToS) and privacy policies. They may or may not notify me when they change them. If they bother to notify me, the message always reads like this: “we are making changes to benefit you…” Yeah, bullshit. These notices never give the useful details. They hide them. Corporations don’t want you to be aware of how they are going to fuck you over more in the future.

The fix seems simple: a tool that once per month fetches the terms of service and privacy policies of all the suppliers we have a relationship with. The tool could extract the text and check it into a local git repo. Another tool could diff the versions and feed the diff into an AI program that tells you in plain English what changed. It could even add a bit of character and say “Next month we’re going to fuck you more by increasing penalties for late payments and shortening the grace period”.
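
A minimal sketch of the fetch-extract-commit part, assuming Python with requests and BeautifulSoup, a hand-maintained list of placeholder URLs, and an already-initialised local git repo; the AI diff summary would be a separate step built on top of this history:

    # tos_snapshot.py -- fetch each supplier's ToS/privacy page, extract the text,
    # and commit it to a local git repo, so `git diff` (or an AI summariser fed the
    # diff) can report what changed month to month. Run from cron once a month.
    import pathlib
    import subprocess
    import requests
    from bs4 import BeautifulSoup

    # Placeholder supplier list, maintained by hand.
    SUPPLIERS = {
        "example-bank": "https://bank.example/legal/terms",
        "example-telco": "https://telco.example/privacy",
    }

    REPO = pathlib.Path("tos-archive")   # assumed to be an existing git repo

    for name, url in SUPPLIERS.items():
        html = requests.get(url, timeout=30).text
        text = BeautifulSoup(html, "html.parser").get_text("\n", strip=True)
        (REPO / f"{name}.txt").write_text(text, encoding="utf-8")

    subprocess.run(["git", "-C", str(REPO), "add", "-A"], check=True)
    # The commit only succeeds when something actually changed; otherwise git exits non-zero.
    subprocess.run(["git", "-C", str(REPO), "commit", "-m", "monthly ToS snapshot"],
                   check=False)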

It would also be useful if the AI could take the whole privacy policy as input and produce a Cliff’s Notes extraction of what’s important. It could take care to detect weasel wording and give the honest meaning (like when the policy says “we only share your personal data when legally permitted”, which really means “we pawn your ass to the full extent legally possible”).

Another nice-to-have feature: you feed it the privacy policies of 10 different banks, and it compares them and produces a detailed report that ranks them by the extent of their privacy abuses.

 

Suppose you are about to travel to some unfamiliar city, perhaps abroad. You don’t want to just show up uninformed, or you might overlook some great restaurant or bar. In principle it would be useful to have an app that visits the websites of all (or selected) restaurants in your destination city before you go and harvests all the PDF menus.

The train or whatever mode of transport may not have wi-fi, but an offline collection of PDFs is a good way to get informed and decide where to go.

If such an app existed, restaurant owners would be encouraged to post PDF versions of their menus on the web.

The list of websites could be grabbed from OSM. Restaurants likely have to be licensed in some way by the government or a hygiene regulator, which could also be a source of website URLs (not sure).
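
A minimal sketch of the OSM part, assuming Python and the public Overpass API endpoint at overpass-api.de; the city name is a placeholder, and only restaurants that actually carry a website tag will show up:

    # restaurant_sites.py -- pull restaurant website URLs for a destination city
    # from OpenStreetMap via the Overpass API. A second step would then crawl
    # those sites looking for PDF menus to store offline.
    import requests

    CITY = "Lyon"   # placeholder destination

    QUERY = f"""
    [out:json][timeout:60];
    area["name"="{CITY}"]["boundary"="administrative"]->.a;
    nwr["amenity"="restaurant"]["website"](area.a);
    out tags;
    """

    resp = requests.post("https://overpass-api.de/api/interpreter",
                         data={"data": QUERY}, timeout=120)
    resp.raise_for_status()

    for element in resp.json().get("elements", []):
        tags = element.get("tags", {})
        print(tags.get("name", "?"), tags.get("website"))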

 

cross-posted from: https://libretechni.ca/post/309317

There are probably thousands of LaTeX packages, many of which are riddled with bugs and limitations. All these packages have an inherent need to interoperate and be used together, unlike almost any other software. Yet there are countless bizarre incompatibilities. There are various situations where two different font packages cannot be used in the same document because of avoidable name clashes. If multiple packages load a color package with different options, errors about clashing options are triggered when all the user did was use two unrelated packages.

Every user must do a dance with all these unknown bugs. Becoming proficient with LaTeX entails an exercise in working around bugs. Often the order of \usepackage lines makes the difference between compilation and failure, and the user must guess which packages to reorder.
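
A minimal LaTeX sketch of the option-clash failure mode and a common workaround; tikz/xcolor is used here only as a typical illustration, not something named in the post, and the packages involved will vary:

    % This FAILS with "Option clash for package xcolor", because tikz already
    % loaded xcolor without options before the user asked for dvipsnames:
    %   \usepackage{tikz}
    %   \usepackage[dvipsnames]{xcolor}
    %
    % Workaround: pass the options globally before anything loads xcolor,
    % so the \usepackage order no longer matters:
    \documentclass{article}
    \PassOptionsToPackage{dvipsnames}{xcolor}
    \usepackage{tikz}
    \usepackage{xcolor}
    \begin{document}
    \textcolor{RoyalBlue}{Package order should not decide whether this compiles.}
    \end{document}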

So there is a strong need for a robust, comprehensive bug-tracking system. Many of the packages have no bug tracker whatsoever. Many of those may even be unmaintained code. Every package developer uses the bug tracker of their choice (if they bother), which is often Microsoft GitHub’s walled garden of exclusion.

Debian has a disaster of its own w.r.t. LaTeX

Debian bundles the whole massive, monolithic collection of LaTeX packages into a few texlive-* packages. If you find a bug in a package like csquotes, which maps to texlive-latex-extra, and you report it in the Debian bug tracker for that package, the Debian maintainer is driven up the wall, because one person ends up responsible for hundreds or thousands of upstream packages.

It’s an interesting disaster because the Debian project has the very good principle that all bugs be reportable and transparent. Testers are guided to report bugs in the Debian bug tracker, not upstream. It’s the Debian package maintainer’s job to forward bugs upstream as needed. Rightly so, but there is also a reasonable live-and-let-live culture that tolerates volunteer maintainers using their own management style. So some will instruct users to file bugs directly upstream.

Apart from LaTeX, it’s a bit shitty because users should not be exposed to MS’s walled garden, which amounts to bug suppression. But I can also appreciate the LaTeX maintainer’s problem... it would be humanly insurmountable for a Debian maintainer to take on such a workload.

What’s needed

  • Each developer of course needs control over their choice of git host and bug tracker, however discriminatory that choice is -- even if they choose to have no bug tracker at all.
  • Every user and tester needs a non-discriminatory non-controversial resource to report bugs on any and all LaTeX packages. They should not be forced to lick Microsoft’s boots (if MS even allows them).
  • Multiple trackers need a single point of review, so everyone can read bug reports in a single place.

Nothing exists that can do that. We need a quasi-federation of bug trackers: multiple places to write bug reports and a centralised resource for reviewing them. Even if a package is abandoned by its maintainer, it’s still useful for users to report bugs and discuss workarounds (in fact, even more so).

The LaTeX community needs to solve this problem. And when they do, it could solve problems for all of FOSS, not just LaTeX.

(why this is posted to !foss_requests@libretechni.ca: even though a whole infrastructure is needed, existing FOSS does not seem to satisfy it. Gitea is insufficient.)

 

cross-posted from: https://libretechni.ca/post/302171

The websites of trains, planes, buses, and ride shares have become bot-hostile and Tor-hostile. This forces us into a manual, labor-intensive effort of pointing and clicking through shitty proprietary GUIs. We cannot simply query for the cheapest trip over a span of time with the parameters of our choice. We typically must also search one travel date per query.

Suppose I want to go to Paris, Lyon, Lille, or Marseilles, and I can leave any morning in the next 2 weeks. Finding the cheapest ticket requires 56 manual web queries (4 destinations × 14 days). And that’s for just one carrier. If I want to query both Flixbus and BlaBlaCar, we’re talking 112 queries. Then I have to keep notes - a shortlist of prospective tickets. Fuck me. Why do people tolerate this? (They probably just search less and take a suboptimal deal).

If we write web scraping software, the websites bogart their inventory with anti-bot protectionist mechanisms that would blacklist your IP address. Thereafter, we would not even be able to do manual searches. So of course a bot would have to run over Tor or a VPN. But those IPs are generally blocked outright anyway.

The solution: MitM software

We need some browser-independent middleware that collects the data and shares it. Ideally it would work like a special-purpose socat command: it would do the TLS handshake with the travel site and offer a local unencrypted port for the GUI browser to connect to. That would be a generic tool comparable to Wireshark (or perhaps #Wireshark can even serve this purpose?). Then a site-specific program could monitor the traffic, parse it, and populate a local SQLite DB. A third tool could sync the local DB with a centralised cloud DB. A fourth tool could provide a UI over the DB that gives us the queries we need.
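
A minimal sketch of the capture-and-store step, using mitmproxy as an off-the-shelf stand-in for the generic MitM tool described above (not the socat-style design itself); the matched hostname, the /search path, and the JSON field names are hypothetical and would have to be discovered by inspecting real traffic:

    # fare_capture.py -- run with: mitmdump -s fare_capture.py
    # Captures JSON fare responses passing through the proxy and stores
    # timestamped fares in a local SQLite DB.
    import json
    import sqlite3
    import time

    from mitmproxy import http

    DB = sqlite3.connect("fares.db")
    DB.execute("""CREATE TABLE IF NOT EXISTS fares (
        carrier TEXT, origin TEXT, destination TEXT,
        depart_date TEXT, price REAL, fetched_at INTEGER)""")

    def response(flow: http.HTTPFlow) -> None:
        # Hypothetical endpoint; the real search API path differs per site.
        if "flixbus" in flow.request.pretty_host and "/search" in flow.request.path:
            try:
                data = json.loads(flow.response.get_text())
            except (ValueError, TypeError):
                return
            # Hypothetical JSON layout -- adjust after inspecting real responses.
            for trip in data.get("trips", []):
                DB.execute("INSERT INTO fares VALUES (?,?,?,?,?,?)",
                           ("flixbus", trip.get("from"), trip.get("to"),
                            trip.get("date"), trip.get("price"), int(time.time())))
            DB.commit()

A sync tool and a query UI would then work off the same fares table, which already carries a per-fetch timestamp.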

A browser extension that monitors and shares would be an alternative solution -- but not as good. It would impose a particular browser. And it would be impossible to make the connection to the central DB over Tor while making the browser connection over a different network.

Fares often change daily, so the DB would of course timestamp fares. Perhaps an AI mechanism could approximate the price based on past pricing trends for a particular route; a Flixbus fare will start at 10 but climb to 40 on the day of travel. Stale price quotes would obviously be inexact, but when the DB shows an interesting price and you search it manually, the DB gets refreshed. The route and schedule info would of course be quite useful regardless (and unlikely to be stale).

The end result would be an Amadeus DB of sorts, but with the inclusion of environmentally sound ground transport. It could give a direct comparison and perhaps even cause air travelers to switch to ground travel. It could even give us a Matrix ITA Software-style UI/query tool with broader coverage.

 
