So it was in the Drivers unit, but it is also easy to incorporate in your own code by linking the .OBJ file and providing the external declaration in any unit.
The Drivers unit is largely independent of the rest of Turbo Vision: it uses the Objects unit (which most projects use anyway, since the System unit at ~500 lines of code provided very little functionality by itself).
For the diskette based install, the .TPU files were on the standard disks and the sources for both the RTL and Turbo Vision were on separate disks, but just about anyone would install them, as they provided a lot of insight. The CD-ROM has them all on the same medium (both as installers and unpacked in the BP directory).
I just checked Turbo Pascal 6.0 (which I did have a VM for), which ships them in the same way.
I hope I’m not alone in this, but I find the cURL documentation hard to follow and short on examples.
My goal was to mimic the HTTP XML posting traffic a server gets from IoT devices. Reproducing that in Google Chrome’s Postman (the Postman REST Client) is very easy and sends the request just fine.
TL;DR
ensure you have an empty --header "Content-Type:" header: this ensures that cURL doesn’t add one and doesn’t mess with how the content is being transferred.
use the --data or --data-binary option with an @ to post a file as the body.
if you want --write-out then be sure you have a recent cURL version.
My first attempt hangs the connection: somehow cURL never signals that the upload is done and the HTTP server keeps waiting. When you put --verbose or --trace-ascii - on the command-line you will see something like this before it hangs: * upload completely sent off: 245 out of 245 bytes.
Using --data with an @file reference automatically adds a Content-Length: 245 header and completes the transfer. But it also adds a Content-Type: application/x-www-form-urlencoded header, causing the content not to be posted as a plain body.
Using --form (-F) instead automatically adds a Content-Length: xxx header (way longer than 245), because it converts the request into a Content-Type: multipart/form-data; boundary=------------------------e1c0d47bac806954 one (the hex at the end differs), which is totally unlike what Postman does.
It is also unlike what the HTTP server accepts.
It turns out that --data-ascii is exactly the same as --data, and that --data-binary just skips some newline conversion compared to --data or --data-ascii. Contrary to the --data-raw documentation, which suggests it is equivalent to --data-binary, it seems --data-raw behaves exactly like --data and --data-ascii. Odd.
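To see for yourself what each option puts on the wire, you can point cURL at a throw-away local listener; a sketch, assuming a netcat variant where nc -l takes the port number directly and a request.xml in the current directory:

# one-shot listener that just dumps whatever arrives on port 8080
nc -l 8080 &
# --max-time makes cURL give up after 2 seconds, since nc never answers
curl --max-time 2 --data @request.xml http://localhost:8080/

The dump shows the Content-Type: application/x-www-form-urlencoded header that --data sneaks in.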
So these all get stuck with the Content-Type: application/x-www-form-urlencoded header, and I thought I was running out of options.
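Combining the TL;DR items above gives a command that does work; a minimal sketch, with the URL and request.xml as placeholders:

# the empty "Content-Type:" header makes cURL drop that header entirely
# instead of adding application/x-www-form-urlencoded or multipart/form-data
curl --header "Content-Type:" \
     --data-binary @request.xml \
     http://example.com/iot/upload

With --trace-ascii - you should now see a Content-Length header, no Content-Type at all, and the raw XML going out as the body.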
It posts exactly the same content as the IoT devices and Postman do.
Phew!
I tried to combine this with the --write-out (a.k.a. -w) option, but for older versions of cURL (I could reproduce this with 7.34) that forces cURL back into Content-Type: application/x-www-form-urlencoded mode, so watch your cURL version!
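For completeness, a sketch with --write-out tacked onto the same command (placeholder URL and file again; use a recent cURL as noted):

# %{http_code} prints the HTTP status code after the transfer completes
curl --silent --show-error \
     --header "Content-Type:" \
     --data-binary @request.xml \
     --write-out "HTTP status: %{http_code}\n" \
     http://example.com/iot/upload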
Later I will put more research into chunked transfer. Links that might help me:
There is a special startup value for “Start Time” you can enter, which makes it run once, 3 seconds after reboot.
If by then your router isn’t fully “up” yet (i.e. still waiting for PPPoE or DHCP network settings), then inside the script you can perform a :delay, as shown in the code fragment from the forum post below.
Don’t you love how people still tend to both repeat themselves and abbreviate stuff even though they have code completion at their disposal?:
{:delay 10};
/log print file=([/system identity get name] . "Log-" . [:pick [/system clock get date] 7 11] . [:pick [/system clock get date] 0 3] . [:pick [/system clock get date] 4 6]); \
/tool e-mail send to="xxx@xxx.com" subject=([/system identity get name] . " Log " . \
[/system clock get date]) file=([/system identity get name] . "Log-" . [:pick [/system clock get date] 7 11] . \
[:pick [/system clock get date] 0 3] . [:pick [/system clock get date] 4 6] . ".txt"); :delay 10; \
/file rem [/file find name=([/system identity get name] . "Log-" . [:pick [/system clock get date] 7 11] . \
[:pick [/system clock get date] 0 3] . [:pick [/system clock get date] 4 6] . ".txt")]; \
:log info ("System Log emailed at " . [/sys cl get time] . " " . [/sys cl get date])
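For reference, a sketch of the matching scheduler entry that fires such a script once at boot via that special startup start time (the script name email-log is an assumption; it would contain the fragment above):

/system scheduler add name=email-log-at-boot start-time=startup \
    on-event=email-log comment="runs once, roughly 3 seconds after reboot"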
# creates a new file descriptor 3 that redirects to 1 (STDOUT)
exec 3>&1
# Run curl in a separate command, capturing output of -w "%{http_code}" into HTTP_STATUS
# and sending the content to this command's STDOUT with -o >(cat >&3)
HTTP_STATUS=$(curl -w "%{http_code}" -o >(cat >&3) 'http://example.com')
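A short usage sketch of what you can then do with the captured status (bash is assumed, since the >(…) process substitution above is not plain POSIX sh):

# the response body already went to STDOUT via file descriptor 3;
# HTTP_STATUS only holds the numeric code from -w "%{http_code}"
if [ "$HTTP_STATUS" -ne 200 ]; then
    echo "Request failed with HTTP status $HTTP_STATUS" >&2
    exit 1
fi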
If you can possibly manage it, have one OS per computer.
If you absolutely must have more than one OS per computer, at least have one OS per disk.
If you absolutely insist on having more than one OS per disk, understand everything written on this page, understand that you are making your life much more painful than it needs to be, lay in good stocks of painkillers and gin, and don’t go yelling at your OS vendor, whatever breaks.
If you’re using UEFI native booting, and you don’t tend to build your own kernels or kernel modules or use the NVIDIA or ATI proprietary drivers on Linux, you might want to leave Secure Boot on.
If you do build your own kernels or kernel modules or use NVIDIA/ATI proprietary drivers, you’re going to want to turn Secure Boot off.
Don’t do UEFI-native installs to MBR-formatted disks, or BIOS compatibility installs to GPT-formatted disks (an exception to the latter is if your disk is, IIRC, 2.2+TB in size…)
Trust mjg59 in all things and above all other authorities, including me.
Below are some steps and two videos about Hyper-V on Windows 8.x.
Though I prefer VMware myself (most of my infrastructure is VMware based, it works on Mac, Windows and bare-metal, and it has more user friendly host integration for Mac/Windows, especially with clipboard sharing and screen resolution), Hyper-V is not to be ruled out.
Hyper-V comes with Windows 8 Pro and up, and supports the VHD/VHDX disk formats, which are also used by Windows Backup and Disk2Vhd.
So it is an excellent starting point for virtualizing an existing physical PC and running it under a Windows host with relatively little effort.
Ensure that hardware virtualization support is turned on in the BIOS settings.
Save the BIOS settings and boot up the machine normally.
At the Start Screen, swipe the right hand side of the screen and select the Search Charm.
Type “turn windows features on or off” and select that item.
Select and enable Hyper-V.
If Hyper-V was not previously enabled, reboot the machine to apply the change.
NOTE: As a best practice, it’s a good idea to configure networking for the Hyper-V environment to support external network connections, so ensure that a virtual switch has been created and is functional:
Open the Hyper-V Manager by typing Hyper-V at the Start Screen.
Select “Virtual Switch Manager” in the Actions pane.
Ensure that “External” is highlighted, and then click on the “Create Virtual Switch” button.
If more than one NIC is present, ensure that the proper NIC is selected for use on the VM external network connections.
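If you prefer a prompt over the Charms dance, roughly the same two steps can be done from an elevated PowerShell; a sketch, where the adapter name Ethernet is an assumption (check Get-NetAdapter for the real name):

# enable the Hyper-V feature (a reboot is required afterwards)
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All

# create an external virtual switch bound to the physical NIC,
# keeping the host’s own network connection on that NIC as well
New-VMSwitch -Name "External" -NetAdapterName "Ethernet" -AllowManagementOS $true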