TeX typesetting
A place to share ideas, resources, tips, and hacks for Donald Knuth's typesetting software TeX. All variants and formats, like OpTeX, LaTeX, and ConTeXt, are welcome.
Suppose you're writing an anonymous letter. Nice-looking LaTeX fonts would be a bad choice because they stand out and make a document quite distinctive. I figure MS Word is probably the most popular word processor, so I had a look at the wordlike package. It's dated 2006 and gives an error on this line:
\renewcommand{\@dotsep}{1}
To hack around it, I tried putting this in my preamble:
\makeatletter
\newcommand{\@dotsep}{1} % hack to avoid wordlike.sty error
\makeatother
That attempt at a hack has no effect. Any ideas?
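One more thing I could try (untested), assuming the error is that \@dotsep is undefined in my class: the definition would have to come before wordlike is loaded, since the package has already choked by the time a hack later in the preamble is read. Something like:
\makeatletter
\providecommand{\@dotsep}{4.5} % harmless if the class already defines it; wordlike then renews it
\makeatother
\usepackage{wordlike}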
Regarding the clickbait title: I have not yet had the need to make ransom demands, which should probably use genuine MS Word anyway. But whistleblowing should be quasi-pseudo-anonymous to some extent, and I thought wordlike would suffice. Of course I'm open to other approaches. Maybe just switching to a sans serif font would do.
The last answer on this page looks interesting but does not work with pdflatex, only XeTeX. There is another non-wordlike approach on this page I might play with.
A long-ass time ago I had a big heavy laser printer that was well documented. It only had a parallel (LPT) port (to give an idea of the age). The documentation gave various control codes that could be sent to the printer. I vaguely recall sending plain text to the port and controlling things like font size using the control codes that were specified in the printer manual. I suppose that was a driver-free mode of operation.
Some TeX documentation talks about how to produce a DVI file with printer control codes (via \special) inserted wherever you want. So imagine you have a cover letter followed by a document you intend to enclose with the letter. You would not generally want the first page of the enclosure to print on the back side of the cover letter, but you might still want the enclosure itself printed in duplex. In principle, you could have the lp command send the job in simplex mode but inject a control code that switches to duplex mode after the first page.
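As a sketch of the idea (hypothetical and untested; everything hinges on the DVI driver and spooler passing the bytes through untouched), a raw PJL escape could in principle ride along in a \special:
% hypothetical: ask a PJL-capable printer to switch to duplex from this point onward
\special{@PJL SET DUPLEX=ON}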
Of course you can inject a deliberately blank page, but that's sloppy. The digital version should have no blanks while the printed version should have blanks in certain places. The \cleartooddpage command is good for the latter but not the former. I suppose the caveat is that PDFs are disadvantaged and likely cannot carry printer control signals the way DVI can.
Printer manuals apparently no longer acknowledge the existence of control codes. So have we lost a capability because manufacturers insist on dumbing everything down for the stupid masses?
What about driverless printers? The CUPS docs mention that CUPS will become driverless. I really hope that does not mean CUPS is going to obsolete my current driver-dependent printer. But in any case, does driverless imply that there will be a standard for controlling printers, so e.g. we can send a signal mid-printjob to switch to full duplex?
Anyone know of a template or sample doc that prints markers around the edges of an A4 sheet?
Or even just a good centralised reference?
I can't believe what shit results my searches are getting. Surely this must be a common need for millions of people. I am not going to go to the print shop, write down their printer model numbers, hunt for the manuals in an ocean of shitty manual sites, and try to dig up the printable-area specs, which are likely untrustworthy anyway. I've done that before, and IIRC Canon's specs were a lie.
Canons seem to have a quite large unprintable area. I know Ricoh does better. It would be useful to see a centralised table with the printable area specs of (at least) all the large industrial printers.
\documentclass[DIV=66, draft=true]{scrartcl} % The draft switch produces a ruler along the boundary of the printed space (which is controlled by the DIV value)
Update1: CUPS test print reveals unprintable area dimensions
It's worth noting that the CUPS test page gives "media limits" info, which is vague but seems to correspond to the printer's printable-area boundary. It's unclear whether that comes from the printer driver or whether the printer is somehow queried for it.
This is of course only useful if you’re not using a print shop.
Update2: came up with code to generate a test print:
% Purpose:
%
% 1) Test whether the unprintable region documented in the printer specs is accurate.
% 2) If not, find the real dimensions.
% 3) Find the maximum DIV setting for the KOMAscript package that does not encroach into the unprintable area.
%
% Procedure:
%
% 1) Lookup the expected unprintable area dimensions for the printer under test.
% 2) Edit SetBgContents below to match the dimensions, which are added to (current page.*)
% 3) Trial and error/tuning: Set DIV=99 and compile. Then set DIV=9 and compile. Notice how the rectangle ruler gets smaller as DIV gets smaller. Find the max value for which the rectangle does not go outside of the violet rectangle.
% 4) With DIV at the max, fiddle with the size and position parameters of the large circle (in DeclareNewLayer). The goal is for the circle to touch the top and bottom edges of the paper.
\documentclass[DIV=99]{scrartcl} % any KOMA-Script class works here; DIV is what gets tuned per the procedure above
\usepackage{scraddr}
\usepackage{scrlayer-scrpage} % needed for \cofoot
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc} % suggested to avoid ``OT1 encoding''
\usepackage{pict2e}
\usepackage{scrlayer}
\usepackage[firstpage=true, color=violet]{background}
\usepackage{tikz}
\usetikzlibrary{calc}
% from another suggestion below:
\SetBgPosition{current page.north west}% Select location
\SetBgOpacity{1.0}
\SetBgAngle{0.0}
\SetBgScale{1.0}
% \SetBgColor{black}
% The line width setting below specifies 1pt but it really looks thicker compared to other lines. Nonetheless, it gives a good thickness for the job.
\SetBgContents{%
\begin{tikzpicture}[overlay,remember picture]
\draw [line width=1pt]%,rounded corners=4pt,]
($ (current page.north west) + (4.2mm,-4.2mm) + (1pt,-1pt) $)
rectangle
($ (current page.south east) + (-4.2mm,4.2mm) + (-1pt,1pt) $);
\end{tikzpicture}}
% The following gives circles and must /follow/ the tikz stuff above.
\DeclareNewLayer[%
textarea,background,mode=picture,
contents={%
\putC{\circle{\LenToUnit{\paperwidth}}}%
\put(0.5\layerwidth,0.5\layerheight-3pt){\circle{\LenToUnit{\paperheight}-0pt}}%
}
]{showtextarea}
\DeclareNewPageStyleByLayers{test}{showtextarea}
\pagestyle{test}
\begin{document}
\phantom0 % There must be /something/ here or else 0 pages are generated. So we put an invisible phantom object.
\end{document}
cross-posted from: https://libretechni.ca/post/309317
There are probably thousands of LaTeX packages, many of which are riddled with bugs and limitations. All these packages have an inherent need to interoperate and be used together, unlike most other software. Yet there are countless bizarre incompatibilities. There are situations where two different font packages cannot be used in the same document because of avoidable name clashes. If two different packages load a color package with different options, errors are triggered about clashing options when all the user did was use two unrelated packages.
Every user must do a dance with all these unknown bugs. Becoming proficient with LaTeX entails an exercise in working around them. Often the order of \usepackage lines makes the difference between compilation and failure, and the user must guess which packages to reorder. So there is a strong need for a robust, comprehensive bug tracking system. Many packages have no bug tracker whatsoever; many of those may even be unmaintained. Every package developer uses the bug tracker of their choice (if they bother), which is often Microsoft GitHub's walled garden of exclusion.
Debian has a disaster of its own w.r.t LaTeX
Debian bundles the whole massive monolithic collection of LaTeX packages into a few texlive-* packages. If you find a bug in a package like csquotes, which maps to texlive-latex-extra, and you report it in the Debian bug tracker for that package, the Debian maintainer is driven up the wall, because one person ends up responsible for hundreds or thousands of upstream packages.
It's an interesting disaster because the Debian project has the very good principle that all bugs be reportable and transparent. Testers are guided to report bugs in the Debian bug tracker, not upstream; it's the Debian package maintainer's job to forward bugs upstream as needed. Rightly so, but there is also a reasonable live-and-let-live culture that tolerates volunteer maintainers using their own management style, so some will instruct users to file bugs directly upstream.
Apart from LaTeX, it's a bit shitty because users should not be exposed to MS's walled garden, which amounts to bug suppression. But I can also appreciate the LaTeX maintainer's problem: it would be virtually insurmountable for a Debian maintainer to take on such a workload.
What’s needed
- Each developer of course needs control over their choice of git host and bug tracker, however discriminatory that choice is -- even if they choose to have no bug tracker at all.
- Every user and tester needs a non-discriminatory non-controversial resource to report bugs on any and all LaTeX packages. They should not be forced to lick Microsoft’s boots (if MS even allows them).
- Multiple trackers need a single point of review, so everyone can read bug reports in a single place.
Nothing exists that can do that. We need a quasi-federation of bug trackers: multiple places to write bug reports and a centralised resource for reviewing them. Even if a package is abandoned by its maintainer, it's still useful for users to report bugs and discuss workarounds (in fact, even more so).
The LaTeX community needs to solve this problem. And when they do, it could solve problems for all FOSS not just LaTeX.
(why this is posted to !foss_requests@libretechni.ca: even though a whole infrastructure is needed, existing FOSS does not seem to satisfy it. Gitea is insufficient.)
Getting burnt by repair-hostile makers of washing machines who refuse to share documentation inspired this form letter (in LaTeX):
\documentclass[DIV=16]{scrlttr2}
%\LoadLetterOption{NF} % uncomment for French standard windowed envelope
%\LoadLetterOption{DIN} % uncomment for German standard windowed envelope
%\LoadLetterOption{UScommercial9DW} % uncomment for US standard double-windowed envelope
\usepackage{ragged2e} % needed to restore the loss of paragraph indents when \raggedright is used
\usepackage{hyperref}
\setlength{\RaggedRightParindent}{\parindent} % restore the loss of paragraph indents when \raggedright is used
\RaggedRight
\newcommand{\appliance}{washing machine} % replace with whatever you need to buy
\newcommand{\mfr}{Machine Maker} % replace with Whirlpool, or whatever
\newcommand{\mfrAddress}{123 sesame street\\90210} % replace with mfr address
\begin{document}
\begin{letter}{%
\mfr\\
\mfrAddress}
\opening{Dear \mfr,}
I am in the market for a \appliance.
When I asked the local retailer (whose profession is to sell your products)
which \mfr\ models include service manuals, they were helpless.
They could not find a single machine that respects consumers and thus their right to repair.
Zero. Every product by \mfr\ in their showroom was anti-consumer.
There are no service manuals published on your website either.
When looking at various second-hand models, many basic user guides were missing as well,
apparently depending on the age of the unit.
I will not buy a disposable anti-consumer \appliance.
Those are for stupid consumers.
A \emph{\bfseries good} \appliance\ meets these criteria:
\begin{enumerate}
\item has a \emph{good} service manual which is available to anyone, free of charge
\item has no cloud-dependency (\emph{all} functionality accessible without Internet)
\item has no app, OR has a \emph{good} app
\end{enumerate}
A \emph{good} app satisfies these criteria:
\begin{itemize}
\item open source
\item requires no patronisation of Google or Apple to obtain
\item has an APK file directly on your website or on f-droid.org
\end{itemize}
A \emph{good} service manual meets these criteria:
\begin{itemize}
\item wiring diagram
\item parts diagram with part numbers
\item inventory of components including the manufacturers and models, and functional resistance ranges (Ω)
\item error codes and their meanings
\item steps to reach diagnostic mode and steps to use it
\end{itemize}
Do you make any \emph{good} pro-consumer \appliance s with a good service manual, with no bad apps?
If yes, please send me the service manual and I will take your product seriously.
If not, you are sure to lose the competition.
If everyone else loses the competition as well, then I will continue washing my clothes by hand
-- perhaps with this repairable machine: \url{www.thewashingmachineproject.org}.
\closing{Sincerely,}
\end{letter}
\end{document}
I suggest sending that letter to every manufacturer making machines for your region. It will get no results, but it sends a message they don't hear often enough.
I often have a long itemised list of words and short phrases that wastes a lot of page space. E.g.
- short
- short
- short
- short
- short
- short
- short
- short
- something a bit longer
- something a bit longer
- something a bit longer
- something a bit longer
- something a bit longer
- something a bit longer
- something a bit longer
- something a bit longer
The natural temptation is to code it like this:
\begin{multicols}{3}
\begin{itemize}[nosep,noitemsep,left=6pt] % nosep helps but still a little wasteful. We try noitemsep in vain.. has no effect
\item short
\item short
\item short
\item short
\item short
\item short
\item short
\item short
\item something a bit longer
\item something a bit longer
\item something a bit longer
\item something a bit longer
\item something a bit longer
\item something a bit longer
\item something a bit longer
\item something a bit longer
\end{itemize}
\end{multicols}
The multicols environment is claimed to be "balanced", but apparently that does not mean a balance of white space: it's hard-coded to give each column an even share of the width (⅓ of \linewidth). So it produces:
- short - short - something a bi
- short - short - something a bi
- short - short - something a bi
- short - something a bit l- something a bi
- short - something a bit l- something a bi
It's even worse than that: the right column actually overlaps the center column and runs off the page, all while a lot of space is wasted in the left column.
I doubt I can expect a pkg to be smart about max item widths for each column. But in principle I should be able to manually micromanage this and say make the left col “5em” wide, or something. The parcolumns package gives that control, but it botches itemised lists even more and would force me to break the list into 3 separate lists because it’s designed to give you control over where the column breaks happen.
The enumitem package suggests something along the lines of:
\SetEnumitemKey{threecol}{
  itemsep=1\itemsep,
  parsep=1\parsep,
  before=\raggedcolumns\begin{multicols}{3},
  after=\end{multicols}}
(in effect), but it has the same problem.
AFAICT, I will have to use parcolumns with norulebetween (because the vertical line is what screws up), and also make separate lists.. which is a bit sloppy code-wise.
I would expect this to be a common problem on CVs, because you need to cram a lot of info into a small space and you would be listing skills, which might involve many short words (C, C++, Java, Rust, Ada, Go, ..) along with some longer words (Fortran, Haskell, COBOL, Pascal).
This thread suggests the vwcol package, but it’s even worse. The code:
\begin{vwcol}[widths={0.2,0.4,0.4}]
yields a first column that uses the full width. I still have to try the flowfram and paracol packages.
Update
The paracol package seems like the best compromise. The layout gives control via a \columnratio command. It will not decide for you where to end a column and start a new one; that can be a nuisance in some cases but welcome in others. In fact, it's not much different from just coding a minipage for each column. Demo:
\documentclass{article} % minimal not used because the itemize environment lacks bullets
\usepackage[margin=6mm]{geometry}
\usepackage{paracol}
\usepackage{enumitem}
\usepackage{lipsum}
\usepackage{blindtext}
\begin{document}
\setlist{nosep} % nix whitespace from all lists globally (to affect the \blindlist in particular)
\columnratio{0.2,0.26}
\begin{paracol}{3}
\blindlist{itemize}[6]
\switchcolumn % start column 2
\begin{itemize}[noitemsep,left=6pt]
\item medium width item list
\item medium width item list
\item medium width item list
\item medium width item list
\end{itemize}
\switchcolumn % start column 3
\begin{itemize}[noitemsep,left=6pt]
\item \lipsum[1][1]
\item \lipsum[1][1]
\item \lipsum[1][1]
\item \lipsum[1][1]
\end{itemize}
\end{paracol}
\end{document}
The array package gives the capability to replicate a declaration before every cell in a specified column. So if you have a column for a price, you can use a format like {ll>{\$}r} to put a dollar sign before every price.
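For instance, a self-contained toy example of that column hook:
% minimal illustration of the >{...} hook from the array package
\documentclass{article}
\usepackage{array}
\begin{document}
\begin{tabular}{ll>{\$}r}
widget & blue & 4.99 \\
gadget & red & 12.50 \\
\end{tabular}
\end{document}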
Usually you want long dates in the running text of a document (e.g. 5th August, 2021), but in a table column you usually want short dates. However, this is broken:
\begin{tabular}{l>{\DTMsetdatestyle{default}\DTMsetup{datesep=/}}rl}
col 1 & \DTMusedate{mydate} & col 3\\
\end{tabular}
The middle column still prints the long form of dates. It obviously clusterfucks the table to put the long-ass \DTMsetdatestyle{default}\DTMsetup{datesep=/} in every middle cell.
The only workaround I can think of is to nest the whole table inside braces ({}), and nest the date config with it:
{\DTMsetdatestyle{default}\DTMsetup{datesep=/}
\begin{tabular}{lrl}
col 1 & \DTMusedate{mydate} & col 3\\
\end{tabular}}
I'm calling it a bug because there is no good reason to make code inside >{} execute in a scope separate from the cell it precedes.
I want to produce a PDF that looks good on screen in color. Of course, if I do that well with color backgrounds and all, the same document will look lousy when printed on a monochrome laser printer. E.g. consider a text box with a color background: the background goes through a dithering algorithm, which often enshitifies the text layer on top of it. Likewise on mono e-readers.
In principle, the doc needs two different representations. One for color and one for mono. As rich as the PDF standard is, I don’t think I have ever seen a PDF with multiple modes. So LaTeX aside, does the PDF standard even support this?
I can think of a hack using PDF layers which is supported by the ocgx2 LaTeX package. Color backgrounds could be isolated to a switchable layer. This is not great though because the end user needs to be aware of the layer and must take a manual action to turn off the background layer before printing as black and white. And still, non-black foreground text will print as gray unless foreground text is in a layer too (yikes).
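For what it's worth, a very rough sketch of that layer hack (alignment is approximate and robustness unvetted); the ocg environment takes {layer name}{layer id}{initial visibility}:
\documentclass{article}
\usepackage{xcolor}
\usepackage{ocgx2}
\newcommand{\shaded}[1]{%
  % draw the background on a switchable layer, then typeset the text on top of it
  \makebox[0pt][l]{\begin{ocg}{color backgrounds}{bg1}{on}%
    \colorbox{blue!15}{\phantom{#1}}%
  \end{ocg}}%
  #1%
}
\begin{document}
\shaded{This text keeps printing even if the reader switches the background layer off.}
\end{document}
Non-black foreground text remains a separate problem, as noted above.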
Am I S.O.L?
URLs can be long and ugly as fuck, littered with special LaTeX-reserved characters like “#”, “_”, “%”, “&”, “@”, “~”, …etc.
The hyperref package apparently does some sophisticated gymnastics to handle the special chars. The \url{} and \href{}{} macros work for most (if not all) chars. But it becomes a shit-show when the same URL is used in multiple places in multiple representations. E.g. I often need a hyperlink as a readable string that visits a URL when clicked; \href from the hyperref pkg does that. But of course the URL is lost when the doc is printed, so the URL needs to become a footnote, which means the shitty-looking ungodly long URL must be entered twice. For example, a first attempt might look like this:
The \href{https://lemmy.sdf.org/c/tex_typesetting}{TeX community}\footnote{\scriptsize\verb|https://lemmy.sdf.org/c/tex_typesetting|} is where we discuss `\LaTeX`.
There is an underscore, which probably has to be escaped in the \verb but not the \href (not sure ATM -- it's hard to keep track of all the exceptions). Of course it's quite annoying that the URL appears twice, so the temptation to thwart redundancy leads to this:
\newcommand{\hardlink}[2]{\href{#1}{#2}\footnote{\scriptsize#1}}
Disaster: some chars need to be escaped for the footnote, but if you escape those same chars for the \href, the backslashes appear literally in the hyperlink, which breaks it. Another problem: when using minipages to get two columns, the footnote width is cut in half, forcing long URLs to wrap at the horizontal midpoint of the page instead of continuing into the wasted footer space under the right column. \mbox fixes that. So after much fiddling with blind hacks like \urlescape, \noexpandarg, \normalexpandarg, and \expandafter, I arrived at this:
\newcommand{\hardlink}[2]{\href{#1}{#2}\footnote{\scriptsize\mbox{\detokenize{#1}}}}
The \detokenize works for some special chars but not others. So still a fuckin’ mess. A LaTeX wizard of sorts went off to work on this problem for me, and came up with this:
\makeatletter
% NOTE: The following is an ugly hack that temporary redefines an internal
% command of hyperref to process the verbatim URL. There is no warranty
% and no support for this code or documents using this code!
\newcommand*{\footnotelink}{%
\global\let\original@hyper@@link\hyper@@link
\let\hyper@@link\onetime@special@hyper@@link% ugly hack
\href
}
\newcommand*{\onetime@special@hyper@@link}[3]{%
\global\let\hyper@@link\original@hyper@@link% ugly hack
\hyper@@link{#1}{#2}{#3}%
\IfArgIsEmpty{#2}{\footnote{\tiny\nolinkurl{#1}}}{\footnote{\tiny\nolinkurl{#1\##2}}}%
}
\makeatother
That monstrosity is the nuclear option that works in most cases. But IIRC it still fucks up in some situations, so I must use a combination of \hardlink and \footnotelink.
But what about QR codes? Fuck me. Another dimension of the same problem. Producing a doc with QR codes but not the URL strips the reader of some dignity. But a footnote is a bad way to expand a barcode: the URL should appear close to the QR code so the reader need not hunt for it. Yet URL length and layout circumstances mean we cannot simply make one macro that hard-codes a placement; every layout situation is different.
Having a bibliography section helps force a standard presentation, but that still requires the URL to be repeated and we don’t necessarily want to be forced to have a bibliography anyway.
What we really need is a URL database, which maps tokens to URLs in the preamble. Consider how the datetime2 pkg works. You can store a list of dates like this:
\DTMsavedate{event1}{2021-12-10}
\DTMsavedate{event2}{2022-02-21}
\DTMsavedate{event3}{2022-03-10}
…
\begin{document}
yada yada \DTMusedate{event1} yada yada \DTMsetstyle{ddmmyyyy}\DTMusedate{event3}…
lorem ipsum \DTMsetstyle{mmddyyyy}\DTMusedate{event2} lorem ipsum \DTMsetdatestyle{default}\DTMsetup{datesep=/}\DTMusedate{event1} …etc.
We need that for URLs. Simply making a \newcommand for each URL would not work because \qrcode, \href, \texttt, \verb and the family of verbatim environments all treat the special chars differently, and some do not even expand commands. It needs to be a macro that can probe its calling context to know which chars to escape.
One of the markdown languages supports URL references. E.g. you can declare ergonomic names for the URLs:
[diseasePlusCure]: https://krebsonsecurity.com/2016/10/spreading-the-ddos-disease-and-selling-the-cure
[diseasePlusCure-ia]: http://web.archive.org/web/20230713212522/krebsonsecurity.com/2016/10/spreading-the-ddos-disease-and-selling-the-cure
[mislabelling-ia]: <http://web.archive.org/web/20211006120915/people.torproject.org/~lunar/20160331-CloudFlare_Fact_Sheet.pdf#page=3>
[fediThreat]: https://write.pixie.town/thufie/dont-trust-cloudflare
[fediThreat-ia]: http://web.archive.org/web/20230827161847/write.pixie.town/thufie/dont-trust-cloudflare
[testamony]: https://dragonscave.space/@BlindMoon38/111954315299607397
[personalisedPricing]: https://web.archive.org/web/20240601161454/http://robindev.substack.com/p/cloudflare-took-down-our-website
Then in the doc write: “Cloudflare exploits [personalised pricing][personalisedPricing]” so the shitty URL does not obnoxiously pollute the text.
A LaTeX approach could be:
\savelink{CloudflareLies}{http://web.archive.org/web/20211006120915/https://people.torproject.org/~lunar/20160331-CloudFlare_Fact_Sheet.pdf#page=3}
…
\begin{document}
Yada yada.. This QR: \uselink[form=qr, width=20mm]{CloudflareLies} leads to \uselink[form=fixedwidthfont,wrapping=false]{CloudflareLies}.
We have something that partially works but is not well documented. There is a url package and there is a hyperref package. The hyperref package is said to supersede the url pkg; in fact, hyperref loads the url pkg, so you would generally ignore url and just use hyperref. But the hyperref docs never mention the \urldef command from the url pkg. The url docs show that you can do this:
\urldef{\nastyurl}\url{http://www.musical-starstreams.tld/~william@orbit/very_long_using_underscores/and%20spaces/file+with^caret.pdf?arg1=x&arg2=y#page=5}
Then you can use \nastyurl throughout your doc, including inside footnotes. But it screws up when used inside \href (no wonder hyperref docs neglect to mention it). It also falls over when used inside \qrcode.
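For completeness, this is how the working subset looks (running text and footnote only, per the limitations above):
\documentclass{article}
\usepackage{hyperref}
\urldef{\nastyurl}\url{http://www.musical-starstreams.tld/~william@orbit/very_long_using_underscores/and%20spaces/file+with^caret.pdf?arg1=x&arg2=y#page=5}
\begin{document}
The guide\footnote{\scriptsize\nastyurl} lives at \nastyurl{} for those reading on paper.
\end{document}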
My fight against enshitification entails hand-delivering paper correspondence by bicycle. I also re-use the windowed envelopes that I receive. When I run out of those, I leave the (new) envelope unsealed so the recipient can easily reuse it.
The recipient generally must use the postal service to reply. And rightly so. Burdening them with the cost of printing and posting serves to punish them for the enshitified digital path they offer. For me, this approach sufficiently casts my anti-enshitification votes while supporting the postal service that gives refuge from enshitification, without excessive environmental detriment.
The LaTeX scrlttr2 class is useful for using and re-using windowed envelopes. If the envelope is standard, the geometry may be known to the supplied KOMAscript machinery. If not, a few measurements can be given as parameters to align an address in a custom window.
To load the US №9 standard envelope, you would start with:
\documentclass[UScommercial9]{scrlttr2}
or for the French standard:
\documentclass[NF]{scrlttr2}
If you reuse a non-standard windowed envelope, you can put the following in the preamble and tamper with the measurements as needed:
\makeatletter
\setplength{foldmarkhpos}{4.2mm} % default=3.5mm; distance from paper edge to fold mark; should account for the unprintable area of your printer
\setplength{tfoldmarkvpos}{108mm} % default=99mm; distance between top fold mark and top paper edge
\setplength{firstheadwidth}{190mm} % default=170mm for NF and \paperwidth for others; width of letterhead
\setplength{firstheadvpos}{10mm} % default=15mm for NF; distance from top edge to letterhead
\setplength{toaddrvpos}{40mm} % default=35mm; distance between top of window and top paper edge
\setplength{toaddrhpos}{98mm} % default=-10mm; distance from the left edge of the paper to the address field (if positive)
\setplength{toaddrindent}{5mm} % default=10mm; left and right indentation of the address within the to-address box
\setplength{toaddrheight}{40mm} % default=45mm
\makeatother
I've created a very customized LaTeX document which contains portions of machine-translated text. I will ask a native speaker to correct the text. I'm not sure who will take on the task yet, but it's unlikely to be someone who understands LaTeX, and my large preamble would add to the intimidation.
I think Overleaf would normally have been ideal, but it restricted access and became hostile toward Tor users a few years ago. I cancel oppressive platforms like that.
One idea is to host it on some arbitrary Gitea server. A low-tech user can probably edit the text directly in the web browser, while a tech-proficient one can use git as it was designed. It doesn't matter if they butcher the code; I'll deal with the cleanup. I guess my main concern is that they would be so alienated by the code that it would put them off this volunteer effort.
Pandoc was one thought: pandoc -o paper.md -f latex -t markdown paper.tex, in which case they would work in a less alien situation. But pandoc can’t even handle my 2-column doc. It falls over on a tabular and produces nothing. But even if it could produce results, I’d expect disaster anyway.
Probably no great answers here.
High-level EU courts apparently assume all those who read their acronym-littered opinions and judgements are Subject Matter Experts (SMEs) who already know what the acronyms stand for.
I’m not a lawyer but this seems sloppy from a legal standpoint because an acronym that is never expanded is ambiguous. It creates room for confusion and misinterpretation in the worst case, and in the very least wastes the reader’s time on investigation.
Have lawyers and judges not been trained on this? As a technologist, my training included the good practice of expanding every single acronym the first time it appears, as I did above with “SME”, as well as the extra diligent but optional practice of including a section at the end with all expansions.
I realise that the whole legal industry is made up of mostly tech illiterates. Geeks have the advantage of being able to use LaTeX with the acro package¹, which enables us to write acronyms without thinking about where each one first appears, because the software automatically expands the first occurrence (or whichever occurrences we specify). Legal workers have probably limited themselves to dumbed-down tools like MS Word, which likely does not automate this, but nonetheless it's the writer's duty to see that acronym expansion happens.
Abbreviations:
SME: Subject Matter Expert
¹ In LaTeX, the preamble would have \DeclareAcronym{sme}{short=SME, long=Subject Matter Expert} and throughout the document each instance would be written as \ac{sme}.
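A complete toy example, in case anyone wants to try it:
% minimal acro example; \printacronyms emits the list of expansions at the end
\documentclass{article}
\usepackage{acro}
\DeclareAcronym{sme}{short=SME, long=Subject Matter Expert}
\begin{document}
Courts write for \ac{sme} readers; later uses stay short: \ac{sme}.
\printacronyms
\end{document}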
It’s disturbing that infosec illiterate friends enter my name and contact details into their Android phonebooks, which then gets recklessly shared in countless ways outside of my control and without my knowledge or consent as to which data abusers ultimately get my contact info.
I try to practice data minimisation even with friends (if they are new), so I don’t give them an email address; generally just my first name, XMPP acct, and phone number. But then of course they enter my name into their dodgy phonebook along with my last name if they happen to get it circumstantially.
So I have a fix of sorts. We can have some control over how the info gets entered into people’s phonebooks by using a vCard. One option is to leave your name blank on the vCard but to graphically put your name in the avatar image on your vCard. OTOH, users will likely manually fill your name in anyway. So consider using the name field but deviating from normal text. You can find some obscure unicode fonts at yaytext.com. Then follow this LaTeX template to generate a contact card:
LaTeX code
\documentclass{minimal}
\usepackage[paper=a4paper,layoutwidth=210mm,layoutheight=297mm]{geometry}
\usepackage[newdimens]{labels}% let the package do the work...
\usepackage{qrcode}
% These attributes are for European label sheet OLW4738
\LabelCols=3
\LabelRows=7
\LeftPageMargin=0in
\RightPageMargin=0in
\TopPageMargin=0in
\BottomPageMargin=0in
\InterLabelColumn=0mm% adjust as required
\InterLabelRow=0mm
\RightLabelBorder=0mm% adjust to taste
\LeftLabelBorder=0mm
\TopLabelBorder=2mm
\BottomLabelBorder=2mm
\LabelGridtrue % <== use to line stuff up; delete this line to process final version
\numberoflabels=12 % ← normally this is 21 to fill a page (3×7), but due to memory overflow bug w/too many QR codes, it must be reduced!
\begin{document}
\genericlabel{%
\begin{minipage}{66mm}% actual label is 70mm wide; subtract \RightLabelBorder and \LeftLabelBorder
\hspace*{4mm}%
\qrcode[height=22mm, level=l]{BEGIN:VCARD\?
VERSION:4.0\?
N:刀囗モ;╝ǫⱨᶇ;;;\?
IMPP:xmpp:johnsnickname@jd.snikket.chat\?
TEL;VALUE=uri;TYPE="cell":tel:+①-𝟝𝟝𝟝-𝟝𝟝𝟝-①²①²\?
LANG:en\?
END:VCARD
}%
\parbox[c]{8em}{%
snkt fingrprint $\rightarrow$\\
\vfill
$\leftarrow$ Vcard4\\
\vfill
dino fingerprint $\rightarrow$
}
\parbox[c]{11mm}{
\qrcode[height=11mm, level=l]{xmpp:johndoe@jd.snikket.chat?omemo-sid-1234567890=a9a9dc175fbdebad99db71f72396a1e7a9a9dc175fbdebad99db71f72396a1}\\
\vfill
\qrcode[height=11mm, level=l]{xmpp:johndoe@jd.snikket.chat?omemo-sid-1234567890=75fbdebad99db71f72396a1e7a9a9dc175fb1e7a9a9dcfbdebad99db71f723}
}
\end{minipage}
}
\end{document}
It’s not infallible but it’s unlikely that enough people would be doing this to justify Google coding their identity cross referencing logic to decode atypical characters.
It's not trivial to get a good style. A lot of the yaytext styles are simple font variants, so when the QR code is scanned the phone seems to automatically normalise them back to plain characters. Unfortunately this means you need to carefully select an odd Unicode style that is being abused as a font, which leaves your name looking like a ransom letter.
Kids can use cool nicknames w/out a real name to mitigate the problem to some extent, especially if they’re a hipster drug dealer, but it’s harder for an adult to pull that off without alienating people and coming off as a kid. We need to at least try to pretend to behave like adults.
It would be nice if there were a desktop app that could give all the yaytext.com styles and a bit more of the obscure ones. There is some python code in this thread but it’s quite limited in fonts. It’s missing the good ransom letter fonts.
(I tried to cross-post to cybersecurity@infosec.pub but this post triggers the slur filter there so I could not post it.)
LaTeX is great for writing letters. It seems like a little known secret how well the scrlttr2 class formats letters for windowed envelopes. LaTeX really makes letter writing enjoyable for programmers (though it would likely be hell for non-programmers).
If I were using a WYSIWYG tool like Libre Office, writing letters would be mundane, boring, and tedious. And the results would be aesthetically limited without doing copious manual labor.
There is noteworthy gratification in turning letter writing into a programming exercise. So whenever a gov agency or corporation fucks me over in some way, I find it therapeutic to write complaints and petitions in LaTeX.
There is a hacktivist mantra that goes something like this:
“write code not text” (not sure on the exact wording)
LaTeX basically turns that on its side because you do both at the same time. I have built up a library of captioned legal statutes in LaTeX, such as commonly referenced GDPR law. So I can crank out GDPR requests quite quickly by using \input statements that import a very nicely formatted block quote of law which I have thoroughly over-engineered. Also fun to use the qrcode package to reference URLs.
The perfectionism probably consumes more time than using a shit tool like MS Word would in the end. But it's enjoyable. And because it's enjoyable, it triggers writing more petitions and complaints than I would otherwise write. Every time I get fucked over by some administrative malpractice, it's another fun opportunity to play in LaTeX and refine my code.
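To give an idea of the reusable-statute approach (the file name and the quoted wording below are placeholders, not my actual library):
% hypothetical snippet file, written via filecontents here only to keep the sketch self-contained
\begin{filecontents*}[overwrite]{gdpr-art15.tex}
\begin{quotation}
\textbf{Art.\ 15(1) GDPR -- right of access:} the data subject has the right to obtain
confirmation as to whether personal data concerning them are being processed\ldots
\end{quotation}
\end{filecontents*}
% then anywhere in a letter body:
\input{gdpr-art15}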
I was disappointed to see that the qrcode package gives no way to insert an image into the center of the QR code. But it turns out QR codes don't need special support for that: they are spec'd with built-in error correction (up to roughly 30% at the highest level, H), so you can simply overwrite that much of a level-H code arbitrarily and it will still decode, as long as you don't mess with the three large finder squares in the corners.
Also worth noting that you can exceed 30% interference if you play games with colors. That is, if a transparent pic uses sufficiently light colors that pass as white (in a black vs white dithering algo), then those pixels obviously don’t count in the 30% tolerance. So some quite clever work could exploit this to make a QR code look less like a pixel blob.
I guess the gripe I have is that the spec caps the redundancy: the four error-correction levels (L ≈ 7%, M ≈ 15%, Q ≈ 25%, H ≈ 30%, selectable via the package's level option) top out at about 30%.
In principle, we should be able to generate a code with 50% redundancy and then clobber up to 50% of it, but the spec doesn't go that far.
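Anyway, here is a rough sketch of the overlay idea (logo.png is a placeholder, and how much clobbering actually survives scanning needs testing with a real reader):
\documentclass{article}
\usepackage{graphicx}
\usepackage{qrcode}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}
  \node (qr) {\qrcode[height=40mm, level=H]{https://example.org}}; % level H: highest error correction, roughly 30%
  % keep the overlay well under that tolerance and away from the three large finder squares
  \node at (qr.center) {\includegraphics[width=9mm]{logo.png}};
\end{tikzpicture}
\end{document}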
It's slightly labor-intensive because for each line of text you have to specify an endpoint, but it's manageable enough. Also worth considering is Inkscape, which has a function to flow text into a shape.
It would be fun to collect some templates for re-use. E.g. if someone wants to complain about the corrupt tyrant who just took power (the most powerful office in the world) a couple days ago, a middle finger would be appropriate for that sort of thing.
Suppose you feed a multi-page PDF into \includepdf. If you only want pagecommand or picturecommand to take effect on some pages, you normally must split the construct up into multiple invocations. E.g.
\includepdf[pages=1, pagecommand={\doStuffOnPageOne}]{file.pdf}
\includepdf[pages=2-, pagecommand={\doStuffOnPagesAfterOne}]{file.pdf}
It gets ugly fast when there are some commands you want performed on every page, and some on select pages, because then you must write and maintain redundant code.
Fuck that. So here’s a demonstration of how to write code inside pagecommand and picturecommand that is page-specific:
\documentclass{article}
% Demonstrates use of the pdfpages package with page-by-page actions that are specific to select pages.
\usepackage{mwe} % furnishes example-image-a4-numbered.pdf
\usepackage{pdfpages}
\usepackage{pdfcomment}
\usepackage{ocgx2} % furnishes the ocg environment (PDF layers, but not really needed for this demo)
\usepackage{fancybox} % furnishes oval box
\usepackage{fontawesome} % furnishes \faWarning
\begin{document}
\makeatletter
\includepdf[pages=1-,pagecommand={%
% \makeatletter ← ⚠ does not work in this scope; must wrap the whole includepdf construct
\pdfcomment[icon=Note, hoffset=0.5\textwidth, voffset=5em]{%
(inside pagecommand, executing on every page)\textLF\textLF
\texttt{\textbackslash AM@page} variable: \AM@page\textLF\textLF
Side-note: the voffset option has no effect when the value is positive (5em in this case)%
}%
\ifthenelse{\AM@page = 1 \OR \AM@page = 2 \OR \AM@page = 12}{%
\pdfcomment[icon=Insert, hoffset=0.5\textwidth, voffset=-6em]{%
(inside pagecommand, affecting only pages 1, 2, and 12)\textLF\textLF
\texttt{\textbackslash AM@page} variable: \AM@page\textLF\textLF
Strangely, the voffset option only works if it is negative.%
}%
}{}
% \makeatother
}, picturecommand={%
\put(50,715){Inside the picture environment:}
\put(50,700){%
\begin{tabular}[t]{llp{0.6\textwidth}}
internal \texttt{\textbackslash @tempcnta} variable (useless): &\the\@tempcnta&\\
internal \texttt{\textbackslash @tempcntb} variable (useless): &\the\@tempcntb&\\
internal \texttt{\textbackslash AM@pagecnt} variable (useless): &\the\AM@pagecnt&\\
internal \texttt{\textbackslash AM@abs@page} variable (useless): &\AM@abs@page&\\
internal \texttt{\textbackslash AM@page} variable (interesting): &\AM@page & \faWarning Inside picturecommand, this number is 1 higher than the actual page number! But it’s correct inside pagecommand (see the annotation note to check).\\
internal \texttt{\textbackslash AM@pagecount} variable (interesting): &\AM@pagecount&\\%
\end{tabular}
% lastpage: \AM@lastpage% broken
\ifAM@firstpage
We might expect this to trigger on the 1st page, but it never does. Likely because the page counter is incremented before picturecommand is invoked. It would perhaps work in the pagecommand construct.
\fi
}
\put(500,770){% The ocg environment is irrelevant and unnecessary.. just here to demo PDF layers.
\begin{ocg}{section labels}{sl1}{on}\color{blue}
\Large\rotatebox{-45}{\setlength{\fboxsep}{6pt}\Ovalbox{Section~A}}
\end{ocg}}}]%
{example-image-a4-numbered.pdf}
\makeatother
\end{document}
It would sometimes be useful to write conditional code that depends on boolean values defined in a parent package. E.g. the pdfcomment package has the boolean option "final", which disables all PDF annotations in the document (\usepackage[final]{pdfcomment}). There is some other logic in my document that should also be disabled when that boolean is true. I tried simply using:
\ifpc@gopt@final\else%
…code that should not run when final is true…
\fi
pdflatex gives: “Undefined control sequence”
More generally, it is often useful to control logic within the document based on a draft option that a parent class or package uses. Likewise, when defining a custom letterhead in the scrlttr2 class there are booleans for the many items that may or may not be wanted in the letterhead.
Has anyone managed to read a parent boolean?
(update) This thread gives useful options for many situations. But it does not completely answer the question because there are non-draft related booleans.
SOLVED
The \ifpc@gopt@final is reachable but only inside a \makeatletter stanza. Thus:
\makeatletter
\ifpc@gopt@final\else%
…code that should not run when final is true…
\fi
\makeatother
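A small follow-up idea (the alias name here is made up): \let the internal switch to a user-level conditional once, so the document body stays free of \makeatletter noise.
\makeatletter
\let\ifannotationsoff\ifpc@gopt@final % the alias must itself start with "if" so \else/\fi skipping still works
\makeatother
% later, in the body:
\ifannotationsoff\else
…code that should not run when pdfcomment's final option is set…
\fi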
Some might find it useful to import a text file and put the contents into a PDF annotation. E.g. a PDF is in language A and you want to make a translation available in language B, in a PDF annotation.
Here’s how:
\documentclass{article}
\usepackage{mwe}
\usepackage{pdfpages}
\usepackage{pdfcomment}
\usepackage{newfile}
\usepackage{xstring}
\usepackage{catchfile}
% heredoc holding text that normally breaks the \pdfcomment command:
\begin{filecontents*}{\jobname_sample.txt}
line one
line two
tricky symbols: _&%
\end{filecontents*}
% normally the above file is whatever you supply to be imported into the PDF annotation. The heredoc is just to provide a self-contained sample.
% Create \pdfcommentfile, which is a version of \pdfcomment that can read from a file:
\makeatletter
\gdef\pdfcommentfile#1{%
\begingroup
\everyeof{\noexpand}%
\long\edef\temp{\noexpand\pdfcomment{\@@input{#1}}}%
\temp
\endgroup
}%
\makeatother
\CatchFileDef{\cfile}{\jobname_sample.txt}{} % side-effects: replaces blank lines with “\par” and drops percent symbols
% Replace blank lines with \textLF and replace special symbols with those that are safe for \pdfcomment. Warning: this is probably not a complete list of all constructs that break \pdfcomment!
\StrSubstitute{\cfile}{\par}{\string\noexpand\string\textLF\ }[\pdfannotationtxt] % the hard space after \textLF is a bit unfortunate; not sure how to get a normal space there
\StrSubstitute{\pdfannotationtxt}{\%}{\string\noexpand\string\%}[\pdfannotationtxt]
\StrSubstitute{\pdfannotationtxt}{_}{\string\noexpand\string\_}[\pdfannotationtxt]
\StrSubstitute{\pdfannotationtxt}{&}{\string\noexpand\string\&}[\pdfannotationtxt]
% the \pdfcomment command cannot directly handle the above substitutions (nor can it handle the original unsubstituted version). So we write the new version to another file:
\newoutputstream{filteredresult}
\openoutputfile{\jobname_filtered.txt}{filteredresult}
\addtostream{filteredresult}{\pdfannotationtxt}
\closeoutputstream{filteredresult}
\begin{document}
\pdfcommentfile{\jobname_filtered.txt}
\includepdf{example-image-a.pdf}
\end{document}
There should be a way to substitute the special characters and blank lines and then feed the result directly to \pdfcomment, but I've exhausted that effort. I've been in this LaTeX rabbit hole for days now trying to do something that should be simple, so this is as far as I go. The code above works, but it's ugly as fuck that we have to write the filtered text to a file and then read the file back in. The file I/O slows down compilation much more than I consider reasonable.
cross-posted from: https://linkage.ds8.zone/post/363360
I am trying to do some simple character replacements on an input file and write the result to a file. The output produced by \StrSubstitute is quite bizarre. Here is a MWE:
\documentclass{article}
\usepackage{newfile}      % furnishes \newoutputstream
\usepackage{catchfile}    % furnishes \CatchFileDef
\usepackage{xstring}      % furnishes \StrSubstitute
\usepackage{stringstrings}% furnishes \convertword (a \StrSubstitute alternative)
% heredoc that creates source input file
\begin{filecontents*}{\jobname_sample.txt}
line one

line two
tricky symbols: _&%
\end{filecontents*}
\CatchFileDef{\cfile}{\jobname_sample.txt}{}
\begin{document}
% Replacements needed:
% & → \&
% % → \%
% _ → \_
% \newline\newline → \textLF (replace blank lines)
%
\StrSubstitute{\cfile}{&}{\&}[\mystring]
\StrSubstitute{\mystring}{\%}{\%}[\mystring]
\StrSubstitute{\mystring}{_}{\_}[\mystring]
\StrSubstitute{\mystring}{\newline\newline}{\\textLF}[\mystring]
\newwrite\myoutput
\immediate\openout\myoutput=\jobname_filtered_native.txt
\immediate\write\myoutput{\mystring}
\immediate\closeout\myoutput
\newoutputstream{filtered}
\openoutputfile{\jobname_filtered_newfile.txt}{filtered}
\addtostream{filtered}{\mystring}
\closeoutputstream{filtered}
\noindent\textbf{filtered catchfile}:\\
\mystring

\noindent\textbf{filtered catchfile (2nd attempt)}:\\
\convertword{\mystring}{\newline\newline}{\noexpand\textLF}
\end{document}
That uses two different techniques to write to a file, and both give slightly different yet wildly unexpected output:
$ cat sample_code_filtered_native.txt
line one \par line two tricky symbols: \protect \global \let \OT1\textunderscore \unhbox \voidb@x \kern .06em\vbox {\hrule width.3em}\OT1\textunderscore \&
$ cat sample_code_filtered_newfile.txt
line one \par line two tricky symbols: \global\let \OT1\textunderscore \unhbox \voidb@x \kern .06em\vbox {\hrule width.3em}\OT1\textunderscore \&
What triggered all that garbage to be created? This is what the output should be:
line one\textLF
line two
tricky symbols: _&%
I also tried a 3rd way to write \mystring to a file, as follows:
\begin{filecontents*}{\jobname_myvar.txt}
\mystring
\end{filecontents*}
That approach literally writes the string "\mystring" to a file, which is useless in this case.
(update) apparently a \string needs to prefix the substituted strings.
It took me longer than I'd like to admit to realize that \directlua's argument is first expanded before it goes into the Lua interpreter, and that \% is defined through \chardef (in plain), which means it's not expandable.
Luckily LuaTeX has the \csstring primitive.
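A tiny illustration of the combination (plain LuaTeX assumed; the format string is just an example):
% \% is \chardef'd, hence unexpandable, so it cannot be smuggled into the Lua source;
% \csstring\% expands to a plain catcode-12 percent character that reaches the Lua tokenizer intact
\directlua{tex.sprint(string.format("\csstring\%.3f", math.pi))}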
Is anyone else doing any fun things with \directlua?
I’m just getting into LaTeX and am starting with a project I’ve cloned from GitHub. I immediately ran into problems compiling because of a bunch of missing packages. I was able to get it running by compiling, seeing where it failed, and installing the missing package, but I had to do this one at a time for over a dozen packages. Is there any sort of requirements.txt or package.json file that lists all dependencies so I can pipe them to the package manager to install?
What's your method for dealing with underfull/overfull \hboxes and unacceptable badness in general?
LaTeX has the \sloppy command, which IIRC sets \tolerance to 9999 and \emergencystretch to a large value. But the default \tolerance is 200 (I think), which is a big difference. It's very "either/or", when maybe there's a more optimal middle ground.
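For instance, a middle ground I sometimes try before reaching for \sloppy (the numbers are purely a matter of taste, not canon):
\tolerance=400          % accept somewhat looser lines than the default 200, without jumping to \sloppy's 9999
\emergencystretch=1.5em % extra stretch considered in a final, desperate line-breaking pass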
For my native language (Swedish) I've found that many issues arise because TeX doesn't find all the possible hyphenation points, so I usually spend time adding words to the hyphenation list.
But still, in any longer text there's usually a couple of paragraphs that just won't set right, I'm curious about your tricks or methods for dealing with them.
ConTeXt has a nice font selection system. LaTeX ported the ConTeXt code in luaotfload.sty and has fontspec on top, and OpTeX has a font selection system with font files and all. In plain, there are only the primitive font switches. There are some packages on CTAN that extend plain's functionality, but I'm not sure how they work.
The good news is, you can use luaotfload.sty directly in plain! Just \input luaotfload.sty. The bad news, if you're into minimalism, is that it depends on LaTeX, so you'll need that installed. An alternative is to use luafonts.tex from the csplain package (\input luafonts.tex); it uses the luaotfload code too.
Once you've done that, you can use all the nice things in luaotfload.
In this example I'll use an updated version of Tufte's Bembo clone, ETbb. You can put the files anywhere luaotfload will find them, ~/.fonts or your project's directory for example.
There are many ways to implement font selection. I rarely use many fonts in a project, so I usually just do something simple like this:
\font\tenrm "ETbb-Regular" at 10pt
\font\tenit "ETbb-Italic" at 10pt
\font\tenbf "ETbb-Bold" at 10pt
\font\tenbi "ETbb-BoldItalic" at 10pt
\font\tensc "ETbb-Regular":+smcp;letterspace=10; at 10pt
\font\tencaps "ETbb-Regular":+upper;letterspace=10; at 10pt
The opentype features come after the name, with a + or - to turn them on or off. To make it a little more semantic I add a size macro, and why not set \baselineskip at the same time. You could also set struts here.
\def\normalsize{% 10pt
\baselineskip=12pt
\def\rm{\tenrm}%
\def\it{\tenit}%
\def\bf{\tenbf}%
\def\bi{\tenbi}%
\def\sc{\tensc}%
\def\caps{\tencaps}%
}
Now I can type \normalsize\rm and the default will be 10pt roman. \it will switch to italic, \sc to small caps, etc. I have two special switches for small caps and big caps because I always want them letterspaced and maybe some opentype features too.
With the same structure, it's repetitive but easy to add a \footnotesize at, say, 8pt, and a \largesize at 12pt.

In regular writing the macros could work something like this:
\normalsize\rm % default for document
\centerline{\largesize\caps Title}
\vskip\baselineskip
Lorem ipsum {\it dolor} sit amet, consectetur {\bf adipiscing} elit. Integer non {\bi accumsan} sem. Vestibulum ante {\sc ipsum} primis in faucibus orci luctus et ultrices posuere cubilia curae; Morbi blandit in nisl sed dapibus. Praesent porttitor id mauris sit amet tincidunt.
\vskip\baselineskip
{\footnotesize\rm
Lorem ipsum {\it dolor} sit amet, consectetur {\bf adipiscing} elit. Integer non {\bi accumsan} sem. Vestibulum ante {\sc ipsum} primis in faucibus orci luctus et ultrices posuere cubilia curae; Morbi blandit in nisl sed dapibus. Praesent porttitor id mauris sit amet tincidunt.\par
}
which gives:
(screenshot of the typeset sample)
This is a very primitive and simple way, and it would probably become tedious if you're using lots of different fonts; then it would be better to use or make a more advanced system. There's a programming paradigm called "Worse is better", and I'm not sure if this is an example of that. Maybe it's just "Worse is worse". But it's easy to understand all the moving parts, which can be a good thing.
The full code:
%\input luaotfload.sty
\input luafonts.tex
\hsize=65mm
\frenchspacing
\tolerance=1000
\font\eightrm "ETbb-Regular" at 8pt
\font\eightit "ETbb-Italic" at 8pt
\font\eightbf "ETbb-Bold" at 8pt
\font\eightbi "ETbb-BoldItalic" at 8pt
\font\eightsc "ETbb-Regular":+smcp;letterspace=10; at 8pt
\font\eightcaps "ETbb-Regular":+upper;letterspace=10; at 8pt
\font\tenrm "ETbb-Regular" at 10pt
\font\tenit "ETbb-Italic" at 10pt
\font\tenbf "ETbb-Bold" at 10pt
\font\tenbi "ETbb-BoldItalic" at 10pt
\font\tensc "ETbb-Regular":+smcp;letterspace=10; at 10pt
\font\tencaps "ETbb-Regular":+upper;letterspace=10; at 10pt
\font\twelverm "ETbb-Regular" at 12pt
\font\twelveit "ETbb-Italic" at 12pt
\font\twelvebf "ETbb-Bold" at 12pt
\font\twelvebi "ETbb-BoldItalic" at 12pt
\font\twelvesc "ETbb-Regular":+smcp;letterspace=10; at 12pt
\font\twelvecaps "ETbb-Regular":+upper;letterspace=10; at 12pt
\def\footnotesize{% 8pt
\baselineskip=10pt
\def\rm{\eightrm}%
\def\it{\eightit}%
\def\bf{\eightbf}%
\def\bi{\eightbi}%
\def\sc{\eightsc}%
\def\caps{\eightcaps}%
}
\def\normalsize{% 10pt
\baselineskip=12pt
\def\rm{\tenrm}%
\def\it{\tenit}%
\def\bf{\tenbf}%
\def\bi{\tenbi}%
\def\sc{\tensc}%
\def\caps{\tencaps}%
}
\def\largesize{% 12pt
\baselineskip=14pt
\def\rm{\twelverm}%
\def\it{\twelveit}%
\def\bf{\twelvebf}%
\def\bi{\twelvebi}%
\def\sc{\twelvesc}%
\def\caps{\twelvecaps}%
}
{\footnotesize\rm footnotesize rm}\par
{\footnotesize\it footnotesize it}\par
{\footnotesize\bf footnotesize bf}\par
{\footnotesize\bi footnotesize bi}\par
{\footnotesize\sc footnotesize sc}\par
{\footnotesize\caps footnotesize caps}\par
\vskip\baselineskip
{\normalsize\rm normalsize rm}\par
{\normalsize\it normalsize it}\par
{\normalsize\bf normalsize bf}\par
{\normalsize\bi normalsize bi}\par
{\normalsize\sc normalsize sc}\par
{\normalsize\caps normalsize caps}\par
\vskip\baselineskip
{\largesize\rm largesize rm}\par
{\largesize\it largesize it}\par
{\largesize\bf largesize bf}\par
{\largesize\bi largesize bi}\par
{\largesize\sc largesize sc}\par
{\largesize\caps largesize caps}\par
\vskip\baselineskip
\hrule
\vskip\baselineskip
\normalsize\rm
\centerline{\largesize\caps Title}
\vskip\baselineskip
Lorem ipsum {\it dolor} sit amet, consectetur {\bf adipiscing} elit. Integer non {\bi accumsan} sem. Vestibulum ante {\sc ipsum} primis in faucibus orci luctus et ultrices posuere cubilia curae; Morbi blandit in nisl sed dapibus. Praesent porttitor id mauris sit amet tincidunt.
\vskip\baselineskip
{\footnotesize\rm
Lorem ipsum {\it dolor} sit amet, consectetur {\bf adipiscing} elit. Integer non {\bi accumsan} sem. Vestibulum ante {\sc ipsum} primis in faucibus orci luctus et ultrices posuere cubilia curae; Morbi blandit in nisl sed dapibus. Praesent porttitor id mauris sit amet tincidunt.\par
}
\bye