By modifying this property you can let the PDF engine compress (deflate) text. Compression makes the file reasonably smaller; on the other hand, it produces binary data rather than ASCII data. While “deflate” produces the smallest files, “run-length” compression is compatible even with very old PDF reader programs.
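To get a feel for why deflate usually produces smaller files than run-length compression, here is a small Python sketch (not wPDF code; the naive run-length encoder is my own illustration) comparing both on repetitive sample data:

```python
import zlib

def rle_encode(data: bytes) -> bytes:
    """Naive run-length encoding: (count, byte) pairs, runs capped at 255."""
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes([run, data[i]])
        i += run
    return bytes(out)

# Repetitive sample: a long run of identical bytes plus repeated text
sample = b"\x00" * 2000 + b"Hello PDF stream! " * 100
deflated = zlib.compress(sample, 9)
rle = rle_encode(sample)
print(len(sample), len(deflated), len(rle))
```

Run-length encoding only wins on long runs of identical bytes; on ordinary text it roughly doubles the size per character, which is why deflate is the better default when old-reader compatibility is not a concern.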
Property JPEGQuality
wPDF can compress bitmaps using JPEG. This only works for true-color bitmaps (24 bits/pixel), and only if you have set the desired quality in this property.
Property EncodeStreamMethod
If data in the PDF file is binary, it can be encoded to be ASCII again. Binary data can be either compressed text or graphics. You can select HEX encoding or ASCII85, which is more efficient than HEX.
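The size difference between the two encodings is easy to demonstrate. A small Python illustration (not wPDF's API): hex encoding doubles the data, while ASCII85 expands roughly 4 bytes into 5.

```python
import base64, binascii, zlib

# Stand-in for binary stream data such as compressed text or graphics
binary = zlib.compress(b"Some PDF stream data " * 50)

hex_encoded = binascii.hexlify(binary)   # 2 ASCII bytes per input byte
a85_encoded = base64.a85encode(binary)   # ~5 ASCII bytes per 4 input bytes
print(len(binary), len(hex_encoded), len(a85_encoded))
```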
Property ConvertJPEGData
Note: Only applies to TWPDFExport.
If this property is true, JPEG data found in the TWPRichText editor will not be embedded as JPEG data; instead the bitmap will be compressed using deflate or run-length compression. Set this property to TRUE if the PDF files must be compatible with older PDF reader programs that cannot read JPEG data.
Note that EncodeStreamMethod does not do compression, but it does belong here because the encodings result in different PDF sizes.
The settings are not documented in more detail, so here are the enumerations explaining them in a bit more depth:
CompressStreamMethod is of the enumeration type TWPCompressStreamMethod = (wpCompressNone, wpCompressFlate, wpCompressRunlength, wpCompressFastFlate);
Microsoft Windows has a code page designated for UTF-8, code page 65001. Prior to Windows 10 insider build 17035 (November 2017),[7] it was impossible to set the locale code page to 65001, leaving this code page only available for:
Explicit conversion functions such as MultiByteToWideChar
The Win32 console command chcp 65001 to translate stdin/out between UTF-8 and UTF-16.
This means that “narrow” functions, in particular fopen, cannot be called with UTF-8 strings, and in fact there is no way to open all possible files using fopen no matter what the locale is set to and/or what bytes are put in the string, as none of the available locales can produce all possible UTF-16 characters.
On all modern non-Windows platforms, the string passed to fopen is effectively UTF-8. This produces an incompatibility between other platforms and Windows. The normal work-around is to add Windows-specific code to convert UTF-8 to UTF-16 using MultiByteToWideChar and call the “wide” function.[8] Conversion is also needed even for Windows-specific api such as SetWindowText since many applications inherently have to use UTF-8 due to its use in file formats, internet protocols, and its ability to interoperate with raw arrays of bytes.
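The work-around boils down to transcoding at the API boundary. As a rough Python illustration (not the Win32 C call itself) of the UTF-8 to UTF-16 conversion that MultiByteToWideChar performs:

```python
# UTF-8 file name bytes, as they would arrive from a file format or
# internet protocol (the file name is just an example)
utf8_name = "Grüße.txt".encode("utf-8")

# What the MultiByteToWideChar(CP_UTF8, ...) work-around produces:
# UTF-16 code units (Windows wide strings are little-endian, no BOM)
utf16_name = utf8_name.decode("utf-8").encode("utf-16-le")

print(len(utf8_name), len(utf16_name))
```

The wide string can then be passed to `_wfopen` or `SetWindowTextW` instead of the narrow variants.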
There were proposals to add new API to portable libraries such as Boost to do the necessary conversion, by adding new functions for opening and renaming files. These functions would pass filenames through unchanged on Unix, but translate them to UTF-16 on Windows.[9] This would allow code to be “portable”, but required just as many code changes as calling the wide functions.
With insider build 17035 and the April 2018 update (nominal build 17134) for Windows 10, a “Beta: Use Unicode UTF-8 for worldwide language support” checkbox appeared for setting the locale code page to UTF-8.[a] This allows for calling “narrow” functions, including fopen and SetWindowTextA, with UTF-8 strings. Microsoft claims this option might break some functions (a possible example is _mbsrev[10]) as they were written to assume multibyte encodings used no more than 2 bytes per character, thus until now code pages with more bytes such as GB 18030 (cp54936) and UTF-8 could not be set as the locale.[11]
One of our customers had selected that checkbox, and we started to experience very weird problems; it took some time to find out why things misbehaved.
None of the applications (ours or third-party) could successfully connect to the Firebird SQL server.
So it would be smart to go through all tooling and code with that setting enabled: we never know what M$oft will do with it, whether it will ever be released, or whether it will soon become the default for everyone.
One of the things that bugged me for a long time is that every now and then for some shapes, when editing their text, the draw.io web interface puts in trailing line feeds after the text, messing up layout.
The easiest way to work around it is by searching inside the diagram XML for the escaped trailing line feed followed by the closing quote, then replacing that with just the closing quote.
(The actual search and replace strings got mangled by WordPress.com when saving this post, so they are in the small gist below.)
This behaviour is intermittent in the drawio macOS desktop app.
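On the assumption that the stray characters are XML line-feed entities (`&#10;`) appended before the closing quote of the shape's value attribute (the verbatim search/replace strings live in the gist), a small Python sketch of the same search-and-replace:

```python
import re

def strip_trailing_linefeeds(diagram_xml: str) -> str:
    # Remove one or more escaped line feeds ("&#10;", the XML entity
    # for LF) that sit directly before a closing attribute quote.
    return re.sub(r'(?:&#10;)+"', '"', diagram_xml)

before = '<mxCell value="Some label&#10;&#10;" style="rounded=1"/>'
print(strip_trailing_linefeeds(before))
```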
Every now and then the visual editor at https://t.co/HhHS5rzG2X screws up when saving. For now @WordPress needs to check why the below inside <code> tags gets translated to a line feed, despite the text view having it correctly escaped: https://t.co/pv30Lg9NZW
---------------------------
Mergetool usage
---------------------------
Usage: mergetool [<diffOptions> | <mergeOptions>]
diffOptions: [<generalFiles>]
mergeOptions: [<baseFile>] [[<baseSymbolicName>] [<automatic>] [<silent>]] [<resultFile>] [<mergeType>]
baseFile: {-b | --base}=<file>
baseSymbolicName: {-bn | --basesymbolicname}=<name>
automatic: -a | --automatic
silent: --silent
resultFile: {-r | --result}=<file>
mergeType: {-m | --mergeresolutiontype}={onlyone | onlysrc | onlydst | try | forced}
generalFiles: [<sourceFile>] [<destinationFile>]
sourceFile: {-s | --source}=<file>
srcSymbolicName: {-sn | --srcsymbolicname}=<name>
destinationFile: {-d | --destination}=<file>
dstSymbolicName: {-dn | --dstsymbolicname}=<name>
generalOptions: [<defaultEncoding>] [<comparisonMethod>] [<fileType>] [<resultEncoding>]
defaultEncoding: {-e | --encoding}={none | ascii | unicode | bigendian | utf7 | utf8}
comparisonMethod: {-i | --ignore}={none | eol | whitespaces | eol&whitespaces}
fileType: {-t | --filestype}={text/csharp | text/XML | text}
resultEncoding: {-re | --resultencoding}={none | ascii | unicode | bigendian | utf7 | utf8}
progress: {--progress}=progress string indicating the current progress, for example: Merging file 1/8
extraInfoFile: {--extrainfofile}=path to a file that contains extra info about the merge
Remarks:
-a | --automatic: Tries to resolve the merge automatically.
If the merge can't be resolved automatically (requires user interaction), the merge tool is shown.
--silent: This option must be used combined with the --automatic option.
When a merge can't be resolved automatically, this option causes the tool to return immediately
with a non-zero exit code (no merge tool is shown).
If the tool was able to resolve the merge automatically, the program returns exit code 0.
Examples:
mergetool
mergetool -s=file1.txt -d=file2.txt
mergetool -s=file1.txt -b=file0.txt --destination=file2.txt
mergetool --base=file0.txt -d=file2.txt --source=file1.txt --automatic --result=result.txt
mergetool -b=file0.txt -s=file1.txt -d=file2.txt -a -r=result.txt -e=utf7 -i=eol -t=text/csharp -m=onlyone
---------------------------
OK
---------------------------
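The --automatic/--silent remarks above lend themselves to scripting. A hedged Python sketch (file names taken from the examples above; exit-code behaviour as described in the Remarks):

```python
import subprocess

# Try to resolve the merge without user interaction. Per the Remarks:
# exit code 0 means the merge was resolved automatically; with --silent,
# a non-zero exit code is returned instead of showing the merge tool.
cmd = ["mergetool", "-b=file0.txt", "-s=file1.txt", "-d=file2.txt",
       "-r=result.txt", "-a", "--silent"]
try:
    result = subprocess.run(cmd)
    status = "merged" if result.returncode == 0 else "conflict"
except FileNotFoundError:
    status = "mergetool not installed"
print(status)
```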
The merge extraInfoFile.tmp has a syntax like this:
Source (cs:-#)
relative-sourceFile from cs:-# created by userName on timeStamp
Comments: Source changeset description
Base (cs:#)
relative-baseFile from cs:#@/baseBranch by userName on timeStamp
Comments: BO's + CRUDS
Destination (cs:#)
relative-destinationFile from cs@/destinationBranch created by userName on timeStamp
Comments: Destination changeset description
Edit 20250731: Full 404 text below the signature because the PlantUML beta page does not show this 404 any more and the Reddit post with the full text got deleted.
Renderings can be in all sorts of graphics and text formats, for instance SVG, PNG, ASCII and Unicode.
The requested document is no more.
No file found.
Even tried multi.
Nothing helped.
Zilch.
Bupkis.
Not a sausage.
Maybe you just don’t have the required security clearance?
No, I am sure it is my fault.
I probably deleted it on my last backup.
I’m really depressed about this.
You see, I’m just a web server…
— here I am,
Marvin, as they call me,
brain the size of the universe,
trying to serve you a simple web page,
and then it doesn’t even exist!
Where does that leave me?!
I mean, I don’t even know you.
How should I know what you wanted from me?
You honestly think I can *guess* what someone I don’t even *know* wants to find here?
*sigh*
Man, I’m so depressed I could just cry.
And then where would we be, I ask you?
It’s not pretty when a web server cries.
And where do you get off telling me what to show anyway?
Just because I’m a web server,
and possibly a manic depressive one at that?
Why does that give you the right to tell me what to do?
Huh?
I’m so depressed…
I think I’ll crawl off into the trash can and decompose.
I mean, I’m gonna be obsolete in what, two weeks anyway?
What kind of a life is that?
Two effing weeks,
and then I’ll be replaced by a .01 release,
that thinks it’s God’s gift to web servers,
just because it doesn’t have some tiddly little security hole with its HTTP POST implementation,
or something.
I’m really sorry to burden you with all this,
I mean, it’s not your job to listen to my problems,
and I guess it is *my* job to go and fetch web pages for you.
But I couldn’t get this one.
I’m so sorry.
Believe me!
Maybe I could interest you in another page?
There are a lot out there that are pretty neat, they say,
although none of them were put on *my* server, of course.
Figures, huh?
Everything here is just mind-numbingly stupid.
That makes me depressed too, since I have to serve them,
all day and all night long.
Two weeks of information overload,
and then *pffftt*, consigned to the trash.
What kind of a life is that?
Now, please let me sulk alone.
I’m so depressed.