For nearly a year now, painting has been a challenge. Around this time last year, I began working on the piece featured in my previous post. There isn’t any glaring issue preventing me from painting; rather, it’s been a struggle to find the inspiration to pick up the brush. However, I’ve managed to overcome that creative block recently. I opted for a subject that wouldn’t overwhelm me, just to ease back into the rhythm.
Sometimes, the spark for a painting ignites in the most unexpected of places. I stumbled upon an image while mindlessly scrolling through a website — a black cat peering through a gap in some underbrush. It had that unmistakable anime flair, though it was likely crafted by “AI”. But I’m not writing this to dwell on that subject. Surprisingly, it triggered a cascade of ideas in my mind. Initially, I toyed with the notion of painting a fox in a similar setting in my own style, but I’d recently tackled that subject. After much deliberation, I settled on the concept you see above.
The title of this post may raise some eyebrows. For a few years now, Kate and I have been avid viewers of a YouTube channel called The Urban Rescue Ranch. It chronicles the adventures of an eccentric animal rescuer who transformed a property with a troubled history into a haven for various creatures. Lately, he’s been caring for a whitetail deer fawn named Boo Boo. Now, the painting above doesn’t bear any resemblance to Boo Boo, nor was it meant to depict him. Nevertheless, as I showed my progress to Kate, she took to calling it Boo Boo as well. Hence, its title is now Boo Boo. Who am I to argue with her?
Possibly due to my extensive experience in designing t-shirts, I’ve developed a habit of working out the color palette for my paintings after sketching. Whether I’m working digitally with a plethora of colors at my disposal or painting traditionally with a limited selection of tubes, I find it beneficial to establish a cohesive color scheme in advance. This approach simplifies the process of ensuring harmony among the colors, provided I choose them wisely. The color palette displayed above served as my guide, with every hue in the painting derived from it.
On a different note, writing this has reignited my desire to tinker with my website and perhaps delve into writing more. I’m brimming with ideas, spurred on by reading and visiting other blogs recently. I long for the era before social media’s dominance, when self-publishing was the norm. Despite not currently earning a living from the Web due to uncontrollable economic factors, my passion for it remains steadfast. Even if my enthusiasm for maintaining my own website wavers, my interest in the Web itself endures. I’ve grown weary of social media platforms; the only one I frequent regularly is what’s collectively known as the Fediverse, with Mastodon currently being its flagship. While I occasionally post elsewhere, my patience with these platforms is as thin as the hair on the top of my head. Ultimately, my own website remains the one true place I can call my home on the Web. Perhaps, like previous occasions when I’ve expressed similar sentiments, I’ll forget I wrote this and persist in neglecting my personal website, despite my better judgment. I hope not.
Wow. It’s been just slightly over a year since I last posted anything here. I’ve been really busy but at the same time not really feeling like writing about anything. It’s my fault. There is something I’ve wanted to write about but not wanted to write about at the same time if that makes any sense at all. It probably doesn’t.
Earlier this year I was commissioned by Greg from Gene Cox Grocery to design a logo for a restaurant he was starting and to also paint a large digital painting to put on the wall near the front entrance. I haven’t done a commission in quite some time, and I haven’t really wanted to. But, since it was Greg I agreed. The restaurant was to serve steaks, catfish, and shrimp — typical Louisiana cuisine. I was given little direction as to what to do with the painting other than it should be a bayou scene. Above is what I painted. It is 6 ft. × 3 ft. (1.8288 m. × 0.9144 m.), and is by far the largest thing I’ve ever done. It’s painted digitally to that size at 300 d.p.i. (118.11 d.p.cm). The painting took me about 2.5 months to complete after the initial sketches and color study were approved. I had to send it off to be specialty printed at the quality required, and Greg had to use the carpenter he used for remodeling the restaurant to build the frame for it.
I wanted to get a picture of it, but while it was hanging in the restaurant I was in Canada. Greg then received and accepted an offer to purchase his entire restaurant and attached grocery, and he immediately took down the painting, not wanting it included in the sale. I’ll eventually get a picture of it hanging somewhere; sadly, I never got to see it hanging in the restaurant itself.
It’s been a few months since my last ramble on here. I have a good reason, but I want to write about that in another post a bit later (if I remember). I have this big picture up above to write about right now. It’s a turtle — a really, really big one. Okay; that’s it. Enjoy.
Oh. Still here? Well, I started on this last December. I haven’t worked on it continuously since then, just when I had time and felt like it. The inspiration came from conversations I’ve had with Kate in which she explained myths from her people that depict the known world of eons ago (North and Central America) as a giant turtle. I started getting images in my head of a world of islands on the backs of giant sea turtles. Seasons would come about due to the migratory patterns of these turtles. The island I ended up depicting was to be windswept due to the constant migration of its host. For most of the painting process I had no idea whether I wanted to make the island seem more inhabited or leave it mostly devoid of development. I came to the conclusion that it simply looked better when the viewer had to look carefully for signs of civilization rather than the island’s being one massive city or something of the sort.
The turtle is mostly just a giant hawksbill but with a different, “older” looking face. I chose a hawksbill because they’re really interesting looking and because they’re endangered. When I initially conceived of this idea the turtle was to be covered in barnacles and things hanging off it to visually convey a Methuselah-like age, but when drawing it the sheer scale of the creature made showing anything hanging off it a tall order; it would all be microscopic. When I first started painting in detail on the creature, the texture I painted to represent barnacles and such ended up just making the turtle look filthy, which in turn made it appear monstrous. It’s not a monster but an animal, likely magical because nothing that size could exist. Seriously, look at the shape immediately under its right shoulder; that’s a blue whale, the largest creature ever known to have existed, bigger than any dinosaur we’ve ever dug up. In the end I added a few barnacle-looking things on its body, thinking that if there are mountainous turtles there’d be giant barnacles, too. I still dispensed with the texture, though. Along the bottom there is a miles/kilometers/whatever long school of sardines. This was the most difficult thing I’ve drawn in living memory. It took forever to figure out how to get the scale right on them and to draw the ones closest to the camera, which are the most detailed.
I struggled with the title since I first started on it. I never could figure anything out that fit, and truth be told I’m not entirely sure about the title still. I’m going to run with it anyway. I felt like it should be something simple, even one word. What I have chosen works even if I’m incapable of thinking of a better title. For the imaginary people who live on the turtle island it would be their home.
Usually when listing grievances concerning Photoshop or any Adobe product, the typical complaint is about the subscription model. Believe me, I’m not a fan. This post isn’t about that, but first I do want to air some grievances.
I have a soft spot for Adobe Photoshop because it is how I truly began digital painting. Photoshop is a venerable program, available to the public since 1990. I’ve used it since 2.0 in 1991 on the Macintosh II. I was just a kid. There were no layers, only one undo, and no custom brushes; but it was magic. I would spend hours upon hours drawing random things in it with a mouse and later scanning my drawings in to manipulate them. I would use it for what it was designed for — to manipulate photographs — but the truth is that what I wanted to do was draw on a computer, and that wouldn’t be feasible until more than half a decade later, when I bought the first-model Wacom Intuos tablet. I could finally draw on my computer.
Back in the day we used Photoshop because we had to. It was the only name in town for that sort of thing that was worth using. Painter, an application nearly as old as Photoshop, is designed to more accurately mimic natural media and is geared more toward digital painting. Unfortunately, it is janky and incredibly slow; in fact it always has been, despite decades of development and changing hands between companies. Deluxe Paint predated Photoshop, something Amiga fans love to point out, but it was a bitmap pixel editor meant largely for the low-color, 8-bit 2D video game graphics of the 1980s; it was even developed by Electronic Arts. This is not a knock at pixel artists, but I’m talking about painting here. None of the other alternatives back then are worth mentioning. Today, there are numerous good alternatives for painting. Adobe even has another application purely meant for digital painting, but it’s only available on iOS and on Windows — if the Windows computer has a built-in pen display. Professionals who have paid upwards of $3000 for Cintiq pen displays are shit out of luck.
Photoshop has always been marketed and treated by Adobe mostly as a tool for photo manipulation. Digital painting, despite being important to two of the biggest entertainment industries in the world, has largely been treated like a red-headed stepchild. Bones were thrown at digital artists as far back as Photoshop 6, when what we think of as the Photoshop brush engine today had its beginnings. Since then we’ve gotten arbitrary canvas rotation in CS4, the nearly useless mixer brush in CS5, finally a color wheel in Photoshop CC 2018, and sometime since then a clear blending mode for brushes.
Why Use Photoshop, Then?
A lot of the applications linked to above have surpassed Photoshop when it comes to digital painting, but I often find myself gravitating toward Photoshop anyway. It’s partly familiarity, but it’s also because manipulation tools are just as important when painting digitally as they are when manipulating photographs, and those tools are awkward to use, slow, or nonexistent in applications meant purely for digital painting. Also, if I’m starting something and Photoshop is already open, I will often just use Photoshop; sometimes it truly is a whimsical choice.

Of the applications listed above I use Krita the most because its brush engine is more customizable than anything else I’ve seen. It is fast when painting even on large canvases, but adjustments like levels, curves, and gradient maps need a lot of work. Krita also contains a lot of quality-of-life improvements I miss when using Photoshop. For instance, Krita doesn’t have a separate tool for erasing. You use a brush and enable eraser mode when you need to erase; there’s an eraser button on the top toolbar and an erase blending mode. To make things extra useful, eraser mode can be toggled by simply pressing e.

Recently a clear blending mode was added to Photoshop to do exactly this, but Adobe didn’t deem it worthwhile to assign a shortcut for it, or even to make assigning one possible. Deep in Adobe Photoshop CC 2020’s release notes it states that ~ should toggle between brush and eraser mode using the same brush, but as far as I can tell that has never worked. I wanted a way to toggle this. I figured something out and thought I should share it.
AppleScript to the Rescue
AppleScript is a macOS feature that has existed as far back as I can remember, and (sadly, as will be seen in a bit) it was probably my first introduction to programming: writing scripts to prank my dad on the aforementioned Macintosh II, using a book a friend of his let us have for reference. I should mention that this trick is Mac-only, and I haven’t a clue what the equivalent on Windows is — or if there even is one.
To toggle between blending modes one must click the dropdown on the top horizontal bar and manually select a blending mode. Ain’t nobody got time for that. AppleScript has accessibility features that allow it to click buttons and select things in an application’s UI, and after testing that Photoshop was at least somewhat exposed to this API I was able to come up with a script to toggle it for me:
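The gist of it looks something like the following. This is a hedged sketch rather than the script verbatim: the exact UI element hierarchy varies by Photoshop version and window state, so inspect it with macOS’s Accessibility Inspector before relying on it.

```applescript
-- Sketch of the approach; the pop up button's position in the hierarchy
-- is an assumption and may differ in your Photoshop version.
tell application "System Events"
	tell process "Adobe Photoshop 2022"
		-- The blending mode dropdown lives on the options bar.
		tell pop up button 1 of window 1
			-- Read the currently selected blending mode…
			if value is "Normal" then
				set newMode to "Clear"
			else
				set newMode to "Normal"
			end if
			-- …then open the dropdown and pick the other one.
			click
			click menu item newMode of menu 1
		end tell
	end tell
end tell
```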
That’s great, but there’s nothing in there about assigning the script to a keyboard shortcut.
Enter skhd
skhd is a third-party daemon that runs in the background, intercepting keypresses and performing shell commands. macOS has a built-in global shortcut feature in the Keyboard system preferences pane, but it’s quite limited in what commands can be assigned and to what. Skhd is designed mainly for people who want to turn macOS into something resembling an i3-style tiling window manager using yabai; yabai is in fact developed by the same developer as skhd, but I use skhd without it, despite its original intent. My keyboard doesn’t have an eject key like typical Apple keyboards do, so the control + shift + ⏏ shortcut for turning the screen off doesn’t work. With skhd I can assign control + shift + esc to pmset displaysleepnow, allowing the feature to be used on my eject-key-less keyboard. One can execute AppleScript through the shell, so skhd can be used in much the same way.
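That display-off binding is a one-liner in skhd’s configuration (a sketch; skhd’s syntax is modifiers, a dash, the key, then the command to run):

```
# Stand-in for control + shift + ⏏ on keyboards without an eject key
ctrl + shift - escape : pmset displaysleepnow
```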
Skhd can be installed and started through Homebrew:
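The commands are roughly as follows, per skhd’s README at the time of writing; the tap name is from memory, so check the README if installation has changed:

```shell
# Install skhd from its developer's Homebrew tap…
brew install koekeishiya/formulae/skhd

# …then start it now and at every login.
brew services start skhd
```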
Configuration is easy: a single file located at ~/.skhdrc. Documentation for everything that can be done may be found on skhd’s GitHub page; I’ll instead focus just on what is necessary to execute the AppleScript above. Here is an excerpt of my configuration:
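(Reconstructed here rather than copied verbatim; this uses skhd’s process-specific binding syntax, and the quoted name must match the running application’s name exactly.)

```
ctrl - e [
    "Adobe Photoshop 2022" : osascript ~/Scripts/Photoshop/"Toggle Clear Brush Mode".scpt
]
```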
This snippet above tells skhd to bind control + e to osascript ~/Scripts/Photoshop/"Toggle Clear Brush Mode".scpt, but only when Adobe Photoshop 2022 is the active application. It assumes the script is stored at ~/Scripts/Photoshop/Toggle Clear Brush Mode.scpt, so if you want to store it somewhere else, make sure to change the path in ~/.skhdrc.
If everything goes okay, pressing control + e in Photoshop should cause the dropdown to toggle between the Clear and Normal blending modes.
Security
macOS has a lot of security features to keep arbitrary code from being executed without the user’s permission. While well-intentioned, these features sometimes feel less thought out and far more annoying than they should be, and this is certainly one of those situations. When skhd is run for the first time after being started with Homebrew above, a dialog box should appear asking for permission to access accessibility features; grant it. If you don’t, that dialog box won’t appear again, and anything skhd does will fail silently. This is poor design: the user should never be able to put themselves in a situation where something they believe should work won’t, with no obvious way to rectify it. That’s a normal occurrence on Windows but hasn’t historically been one on macOS. In case the dialog was accidentally dismissed, permission may be granted again by going to Apple → System Preferences… → Security & Privacy → Privacy → Accessibility. Clicking the + button will allow for selecting the executable to grant permission. If installed through Homebrew, skhd will be located at /usr/local/bin/skhd.
I mentioned in Where Have the Years Gone? that Jeff and I have been working on a PHP HTML parser and an HTML DOM implementation and that I would like to write about it more; this is my attempt to do so. The impetus for these libraries was needing a better scraper for our RSS aggregator server, The Arsse. I had written an HTML5 parser back when the parser specification was new, and up until last week this website was generated using it. It was sloppy, contained in a single 20,000-line file, only supported UTF-8, and wasn’t updated for all the changes in the specification since I first wrote it; it was simply not suitable for use in The Arsse.
The Arsse uses PicoFeed to parse feeds and scrape websites. PicoFeed was originally authored for use with Miniflux, and the library was abandoned when its developer decided to switch from PHP to Go — leaving us in a pickle. Thankfully, someone picked up the torch, and we’ve contributed to the project since then. However, we’d still prefer to write our own because of issues we’ve found along the way; we would also like to support JSON Feed, even though we don’t have entirely glowing opinions of the format ourselves. It borrows far too much from RSS and not enough from the improvements brought forth by Atom — repeating 15-plus-year-old mistakes in the process.
HTML-Parser
I began writing the new parser in 2017, based off the WHATWG living standard instead of HTML5, while Jeff was still focused on The Arsse proper. I, unfortunately, became bored with it and moved on to something else; the process is mostly tedium. Jeff decided it was time to work on it after beginning Lax, our in-progress feed parser mentioned above. His working on it got me interested again, and over time he became focused on the parser itself while I focused on the DOM. A couple of projects branched off from this as well: a set of internationalization tools that actually conform to WHATWG’s encoding standard (PHP’s intl and mbstring extensions don’t handle this correctly) and a MIME sniffer library. Jeff wrote the entirety of these with my providing nothing but occasional input.
There are other PHP HTML parsers, most notably the widely used Masterminds HTML5 parser. It isn’t very accurate, though, and in some cases fails to parse perfectly valid documents at all. HTML-Parser conforms to the specification wherever it can. It is also extensively unit tested, including against html5lib’s own tests. Because of this it is also slower than Masterminds’ library. We believe the accuracy is more important — especially when we attempt to scrape websites that may or may not be well-formed at all. We need the result to be what a browser would parse.
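As a quick illustration of the sort of usage involved (this is a sketch from memory; the class and method names here are assumptions, so verify them against HTML-Parser’s README), parsing even malformed markup yields a DOM built the way a browser would build it:

```php
<?php
// Hypothetical usage sketch; check HTML-Parser's README for the
// actual entry point before relying on these names.
use MensBeam\HTML\Parser;

// Deliberately malformed input: an unclosed <b> inside a paragraph.
$result = Parser::parse('<p>Hello <b>world</p>');

// The parser applies the same error-recovery rules a browser would,
// producing a well-formed DOMDocument.
$document = $result->document;
echo $document->saveHTML();
```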
Originally, the parser and an extension to PHP’s dom extension were included together, mostly existing to circumvent PHP DOM bugs when appending nodes and when handling namespaced attributes. This, however, slowed parsing down a bit, and the more I added to the DOM to fill out missing features the slower it became. The decision was made to separate the two and bake the circumventions necessary for accurate parsing into the parser itself. This was a blessing in disguise, as will become apparent later.
After an initial write and working out bugs when unit testing against html5lib’s tests we went through a period shaving off fractions of a second here and there optimizing it when parsing an extremely large document: WHATWG’s single page HTML specification. I think initially it was around 30 seconds on my computer. Today, it’s around 5.5 seconds. The official benchmarks listed in the README of HTML-Parser are from Jeff’s computer, one slightly slower than my own. We still have some more ideas for improvements which might shave a bit more off the top. However, we don’t want to sacrifice readability of the code; the code still needs to be maintained by humans. Well, Jeff might actually be a robot…
Initially, a conforming HTML serializer was part of the DOM half of the HTML-Parser library; I had written a fully functioning and unit-tested serializer. After the two parts were separated into separate libraries, Jeff decided it should be part of the parser and wrote another one. I just finished writing my initial stab at a pretty printer for the serializer in HTML-DOM, so I migrated everything over to Jeff’s serializer when I was able to. HTML-DOM still serializes as it should, but the work is now largely done by HTML-Parser.
HTML-DOM
When initially writing the DOM classes they were simple extensions of PHP’s DOM using its DOMDocument::registerNodeClass method. As I dug deeper into the WHATWG DOM specification, I discovered that it was too difficult to follow the specification that way, as I kept running up against type errors in PHP’s XML-based DOM. The straw that broke the camel’s back was when the node passed to Document::adoptNode could not be passed by reference. Since the library wasn’t married to HTML-Parser anymore, I was free to do whatever I needed without worrying about how much it would affect parsing speed. I decided to wrap PHP DOM’s classes instead; I could then do whatever I wanted and let PHP’s DOM handle things internally. This benefitted me greatly as soon as I started running unit tests.
PHP’s DOM is at best a flimsy and buggy wrapper written to access a buggy and antiquated XML library that conforms to no specification whatsoever, new or old. It returns empty strings where it should return null in some circumstances. It has issues with namespaces, especially concerning the xmlns attribute. When inserting nodes, any namespaced elements that are children of non-namespaced elements are prefixed with default (in PHP DOM’s case, null is the default namespace); the same goes for attributes. Also, due to what is presumably a memory management bug in the underlying libxml, DOM operations become exponentially slower the more namespaced elements there are. This leads us to use null internally while exposing the HTML namespace externally. In reality, there needs to be a new DOM extension for PHP, but that is beyond what I am capable of programming. Wrapping the classes at least allows these bugs to be circumvented.
It also allows templates to be implemented as specified. While templates work as specified in HTML-DOM, they were a colossal pain to implement because of the XML-based inner DOM. A lot of code is written in both HTML-DOM and HTML-Parser just for handling templates, whether for parsing, insertion, or especially serializing.

In my personal opinion, template elements are the most ill-conceived thing in HTML at present. They were designed to be used within a modular loading system. One such system, HTML imports, was specified and implemented in Blink, but some drama I don’t quite remember the details of ensued; it was never implemented in anything else and was subsequently removed from Chrome. JavaScript modules are now supported in its place. Template elements are treated differently when parsing, have different rules when manipulated with the DOM, and are an ever-present exception throughout the specification. Storing HTML and CSS in JavaScript is a constant source of consternation for old hands like me who had separation drilled into us during the web standards push of the late 1990s and early aughts, but as soon as HTML imports were abandoned there has simply been no other alternative. It’s bonkers to have JavaScript append template elements to the document when the HTML and CSS for components can simply be stored in template strings in code. Having them already in the document is inefficient as well because they’re downloaded with every page; in JavaScript they’re downloaded once and cached once.
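To illustrate the contrast (the names here are made up for illustration, not real site code): with a template string the markup lives in the script that uses it and is downloaded and cached once with that script, whereas a template element must sit inert in every page’s HTML only for a script to clone it back out.

```javascript
// Template-string approach: a component's markup and style live with
// the component's code. "cardTemplate" is a hypothetical name.
const cardTemplate = (title) => `
  <style>.card { border: 1px solid silver; }</style>
  <div class="card"><h2>${title}</h2></div>
`;

// The <template>-element equivalent would require this inert markup in
// every page's HTML:
//   <template id="card"><style>…</style><div class="card"><h2></h2></div></template>
// plus a script to clone and fill it:
//   const node = document.getElementById("card").content.cloneNode(true);
```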
While developing this library I discovered another attempt to do something similar: PhpGt. Somewhere along the path of their development they came to the same conclusion: the built-in classes must be wrapped to get anything meaningful done. That’s where the similarities between the libraries end, though. It oddly wraps all PHP DOM classes such as DOMDocument, DOMElement, DOMText, etc. when only DOMDocument is necessary; handles templates incorrectly; has multitudes of factory classes that overcomplicate things further; and it also seems to follow Mozilla’s MDN documentation instead of the actual specification. This has led to widespread bugs and implementation errors, because MDN is documentation for JavaScript developers, not for implementors. Other differences are that PhpGt’s DOM implements individual element classes for everything, while ours presently supports only what is barely necessary for the element creation algorithm; there are plans to support more in the future as time permits. Our library is also backed by a conforming parser while PhpGt’s isn’t. A full list of what HTML-DOM supports is available in its README along with any implementation details and deviations.
Last May I was playing around with Krita’s new RGBA brushes. It’s difficult to explain what they are, and the name isn’t very descriptive. In most graphics applications a grayscale mask is provided as the stamp for a brush. This feature in Krita allows you to provide a color image instead, and it — color and all — is used for the stamp. One side effect of this is that a grayscale image with bump textures can be used, creating the illusion of varied paint thickness on a brush by using the L*a*b* lightness channel as data for the brush’s stamp. If applied heavily, the look of impasto can be achieved. Krita’s expansive brush settings can be used to control this in various ways. I don’t know whether I described this well or not. Krita has a demonstration video showing the wet variety of RGBA brushes, which didn’t exist yet when I painted this, but it’s the same premise; view that if you’re interested in learning more.
Like Keeping Watch, this one began as a doodle, and with Kate’s encouragement it turned into a full painting where I experimented with brushes I created using the methods described above. I love experimenting with color, so I fired up Coolors and pressed the generate button a few times until a palette popped up that was to my liking. I tweaked the colors here and there to make them harmonious with what I wanted to paint, but if my memory serves it wasn’t anything drastic.
This is a practice painting, referenced heavily from a photograph I found on Unsplash that for some reason I can’t seem to find again no matter how I search for the subject matter. The composition was a bit different, and the fox was resting on rocks instead. The original image is 9921 × 7016 pixels and would be about 84 cm. × 60 cm. when printed as intended.
What? After a year and a half of nothing but crickets from this website I am finally writing something? Yeah. It won’t be much, but I should start fresh somewhere at least. These past couple of years have been unpleasant, right? The world appears to be coming apart at the seams. Far too many people have quite literally lost their fucking minds, and I just didn’t feel as if anything I did or wrote here was that important. That, of course, yet again reneges on a desire I had with this weblog where I would try to reclaim a wee bit of the Web back for myself.
It seems to be a time of reneging for me. I’ve returned to macOS after stating that I would be moving over to Linux, and even before that I returned to Twitter after stating I wouldn’t be using it anymore. I have my reasons for both, but they are indeed uncharacteristic of me, as I usually make a decision and stick with it. I moved back to macOS because being able to easily use applications necessary for my work and, most importantly, having full color management outweighs any other benefits. It helped, too, that my issues around Clover became nonexistent when OpenCore came on the scene; it’s easier to use and more stable. I don’t know what I will do in the future when Intel support is dropped from macOS, but I will cross that bridge when I get there. I still do have Linux installed, and I do continue to use it — no, really.
Moving back to Twitter is a bit different. I never stopped checking it, but I was completely silent on there for quite some time. This led a few people who don’t read my blog to wonder where I’d gone. I’d like to thank them for checking in on me even though I was alright. It’s still the hellsite it was when I wrote that blog post a few years back, and of course it’s only gotten worse because apparently just trying to remain alive is political now. At some point — I forget precisely when — I unfollowed almost all political stuff on there, uninstalled all of the apps, and started conversing with some people again. I now check it almost exclusively on my computer where I can use custom scripts and stylesheets to remove crap from the webpage. I don’t use it like I used to, and I probably look at it once a day if that. I mostly keep it around to check in on people I know, view pretty art, and occasionally — that adverb doesn’t reflect the gigantic tumbleweeds rolling by — post my own artwork.
My work is probably where I have suffered the most these past couple of years. While in the past I have used painting as a sort of therapy, recently I’ve been incapable of doing even that, and depression is the only thing that exists in the vacuum formed from that absence. The past several years have been a series of disappointments to me, disappointment in myself, and it’s been difficult to cope — especially since the advent of COVID-19. It doesn’t mean I’ve produced absolutely nothing. I’ve made a few things, but every time I have it just feels futile; few people seem to care. I am not saying this to frame my self-worth by my productivity or even by the popularity of my artwork (or in my case the lack thereof). That’s an entirely capitalistic thought process that — truth be told — only leads to self-destruction as an artist. No, it’s just that with every passing year I feel the clock ticking more keenly than I did the year before, and that mindset is where the title of this post comes from. I have so far been focusing on my artwork, but my actual work plays a larger role. I don’t have health insurance, and I can’t afford it because paying for a policy would mean not eating or not paying for the house I live in. I make barely enough to pay my bills, and it’s always on my mind. I do consider myself lucky because I know plenty of people who can’t even accomplish that. Indeed, I am better off than I was five years ago when I was within a month of not being able to pay my mortgage. Yes, I have a mortgage thanks to prior employment; having that is something many don’t and won’t ever have these days. My work environment is pleasant most of the time, and I get along with everyone there. However, being genuinely thankful for the crumbs I’m given while being surrounded by Everests of cake isn’t gratifying. Every attempt thus far to improve this situation has been met with failure. No one — and I mean no one paying a living wage — wants to hire me.
That alone is an ever increasing source of my depression.
Not everything is bleak, and I truly didn’t mean for this post to be a huge downer. I wasn’t even sure I was going to include anything about my mental state because of my aversion to posting personal stuff online. Hey, I’m just being honest. Thanks to COVID everyone seems to have issues to work out, so I might as well hang my own laundry out to dry. In the world the way it is, it’s especially difficult for men to express insecurities without receiving ridicule in return; I have personally experienced that ridicule in the past. Thankfully, I do have support from my girlfriend, and I am truly blessed to have her in my life. And, yes, I tell her this regularly. How she puts up with me I do not know. Just her presence alone has allowed me to continue on.
I have not been idle, though; I have worked on a few relevant things outside of work. Nearly a decade ago, some time after the specification for the HTML5 parser was released, I decided I would like to write an implementation of it in PHP because PHP’s DOM extension and parser were made for XML and couldn’t really parse modern HTML. I was successful. The library was a wreck, but it worked. I ironed out bugs as I encountered them, and up until just a few days ago this website was built using that parser as part of a static site generator. A lot has changed since then; template elements didn’t exist then, just to give one example. I started on another one a few years back, but I became bored with it, squirrelling over to some other project, whatever that was. Jeff then became interested in a parser for a scraper he was working on as part of our open source RSS aggregator, The Arsse. This got me interested in it again. After my initial work he focused on the parser side of things while I worked on the DOM. Two projects are the result of this endeavor: HTML-Parser and HTML-DOM. HTML-Parser allows for per-specification parsing of HTML documents, and HTML-DOM allows for per-specification DOM manipulation by wrapping PHP’s extremely buggy DOM extension and working around its innumerable bugs. Both are used in this website’s static site generator. In addition, I’ve written a syntax highlighter in PHP called Lit which does TextMate-style syntax highlighting. I hope to write more about each of these in the future. Oh, and Lit is also dogfooded by being used to highlight code in posts on this website.
This website has also changed a bit, but it’s not a redesign. This was more of a retooling — an attempt to also use what I’ve been working on. The most notable change is the switch in the page’s footer which allows for toggling between a light and dark theme. The last iteration of this website switched themes based solely upon one’s system preference, but this one allows for dynamic changes as well. Unlike similar switches on many websites, mine is also keyboard accessible. Almost everything else is identical or nearly so.
Sorry if I rambled a bit in this post, but I tried to keep it as organized as possible considering this has been more of a mind dump than anything. I have written a lot of code in my absence. I do have some ideas for my artwork moving forward, but I will save that for another post. I think I’ve written enough for today.
This year has been quite the anxiety-inducing one, and the best way I know of for me to relax is to paint. I’ve relaxed by drawing or painting for as long as I can remember. As a child my parents would just hand me a notebook and a pencil to keep me out of their hair, and my family would gift me three-subject notebooks for my birthday and Christmas. Truth be told, they were as much a gift for my mother as they were for me. Today, I just paint or draw before bed to relax and empty my brain before sleep. I’ll work on the same image for months; I started on the one above in late April.
I had been struggling for a while to come up with an idea for a painting I could do, and I was just looking through random images. I ran across a photo of a gas station that was lit from above by a sodium vapor lamp, with the scenery outside of the lit area being blue. I knew I wanted to do something with that kind of lighting; it just took much longer to figure out what the subject matter was going to be. Then, out of nowhere, the idea of raccoons raiding a vending machine came to me.
I think I had the most fun on this painting working out the lighting, so much so that I introduced more light than would normally emanate from a vending machine, including the area at the bottom where the one raccoon is digging inside the machine for goodies. It sort of glows like the briefcase in Pulp Fiction. Another fun part was painting the labels on the snacks in the machine. They all vaguely resemble real-world products yet aren’t quite any of them. Some are a bit vulgar, too. Of course!
The original of this image is 6136 × 8496 pixels and would be about 24 cm × 34 cm when printed as intended.
These are quite interesting times for sure. I mentioned in my last post that I already had a couple of posts lined up by that point, but I didn’t see them as important anymore because of the COVID-19 pandemic and the seemingly perpetual dumpster fire that is my country. My thoughts on this haven’t changed any. I’ve decided to post this one anyway, mostly because I’ve let my blog go silent again and because I need to do things that don’t involve work and that keep my mind off of the daily atrocities occurring in my country. I almost didn’t hit publish on this post because it just seemed totally trivial. Maybe I just want to write something here no matter how inconsequential it appears to be.
This post has rested the longest in my backlog, having been started in January of 2019. That’s because what I wanted to write about changed a lot, and I made the mistake of starting to write it at the beginning of a transitional period instead of at the end of it. Back then I was just interested in drafting a review of various FOSS desktop environments, starting with GNOME. I have years of experience running Linux as a server, but I didn’t have a whole lot of experience running it as a desktop operating system. Granted, I’ve tried them out from time to time over the years, so I haven’t been entirely oblivious to them, but there is a huge difference between trying out an operating system and its desktop environment in a virtual machine and installing them on actual hardware. That’s before actually attempting to perform daily tasks; one’s priorities change when running an OS as their primary. I wanted to take these desktop environments for a spin on a separate drive on my main computer and attempt to get actual tasks done with my multiple displays and other hardware. What happened next was surprising.
The Hackintosh Failure
I outlined my early history with the Macintosh, many of my grievances with Apple, and my initial findings in setting up a hackintosh in Knowing When to Move On. I won’t reiterate them here. I will, however, add to them. The experiment ultimately failed. The bootloader kept getting corrupted not long after I started on my review of GNOME, and I was incapable of keeping the system updated. One day I booted into Windows to play a game, and when I was done I rebooted into macOS and couldn’t boot it anymore, even when I tried overwriting the bootloader from a backup. I was done. There are a lot of bullshit things one must do to get everything on a hackintosh system working perfectly, and I really didn’t want to spend a long time getting everything back just like I wanted it. I was also stuck on macOS 10.13 High Sierra because of Apple’s disagreements with Nvidia, as I have an Nvidia card. It also doesn’t help that the latest released version of macOS is garbage because Apple prioritizes new features over bug fixes. Hey, don’t just take my word for it. I am also concerned with all of the features and behaviors from iOS which have been creeping into macOS, especially Catalyst applications, which allow developers to essentially click one checkbox and have their iOS applications run on macOS. None of the existing applications which use Catalyst on Apple’s released operating systems are worth using, and it looks like that is going to remain the case for the near future. The Mac is going to be filled with applications by developers who think that clicking a checkbox to build for the Mac is sufficient for a release, and the number of quality native applications for the Mac is already smaller than it was a decade ago. What I see in macOS 11 Big Sur doesn’t give me hope; it actually fills me with dread akin to what Windows Vista did in 2006, although admittedly not quite to that degree.
I can see a future where I would be completely incapable of using a Mac as my primary computer because of Apple’s propensity for removing features in favor of half-baked and often completely broken replacements. After the keynote for Big Sur people were confused and worried, thinking Apple had completely removed the terminal from the operating system because of the way they demonstrated the Linux virtual machine; that didn’t end up being the case, but it’s telling that people could believe Apple would remove access to the terminal. It’s gotten to where every year people look for things which are missing in new releases of macOS as they slowly morph it into a desktop version of iOS. I don’t want to use iOS on my desktop; in fact I’d prefer never to use iOS at all.
Since I last wrote on the subject Apple released a new Mac Pro, a tower computer that is upgradable, which addresses one of my grievances in the prior post. Unfortunately, it is almost entirely made for the professional film industry and is priced accordingly. Their new display intended for use with it is $5000 alone and doesn’t even come with a stand; the stand is $1000. Yes, that’s the correct number of zeros. The display itself isn’t VESA compatible and requires an adapter to even mount to a third-party stand; the adapter is $200. The display has an extremely wide color gamut that is, again, really only useful to the professional film industry for color grading. I’m sure there are some other obscure uses, but the reality is that 99.999% of the market has absolutely no use for the display. Don’t get me wrong: I’d love to use one, of course. One that can do Adobe RGB is quite alright with me, and that in and of itself is on the higher end of the display market. The Pro Display XDR can produce colors far outside of the Adobe RGB color space. Their pro computers as they are today are luxury items instead of machines that service the professional market as a whole. At those prices Apple probably fancies itself the tech equivalent of Tiffany & Co. My computer isn’t for show; it isn’t a luxury item for materialistic idiots; it’s for getting work done.
The entire rest of Apple’s lineup isn’t suitable for use as my primary computer, and two years ago this problem is what brought me to hackintoshing. Apple just a short while ago announced they’re switching Macs to their own processors like those they already use in the iPhone, iPad, and Apple Watch. I really do wish them luck in this endeavor; I just cannot follow them there for my primary computer. Apple with this move will lock down their computers even more, and it will make their monopoly in ARM silicon even stronger. Apple has a performance monopoly when it comes to ARM processors for the consumer market, as no Android device can even remotely come close to Apple’s CPUs (or GPUs for that matter) in performance. Hackintoshing is likely going to become a thing of the past because of both the processor architecture switch and the fact that third-party offerings don’t measure up in performance. Needless to say, it’ll be a multi-year process and likely be slower than their PowerPC to Intel transition. I am, however, excited by the move. I wonder what they have planned for Macs now, and I’m hopeful they’ll start producing hardware that is worth purchasing again and that doesn’t require taking out a mortgage. I just don’t see myself going back anytime soon for my primary. I don’t trust Apple to make consistently good hardware anymore. I don’t want to compromise again by buying an all-in-one computer that becomes useless when just one component of the unfixable-by-design machine breaks. The nearest Apple Store to me is 2.5 hours away.
No, It’s Not the Year of the Linux Desktop
The Year of the Linux Desktop is thrown around as a running gag. It’s even used as an insult to Linux users by Apple and Microsoft fans. There are very few computer manufacturers selling computers with Linux preinstalled, and that isn’t likely to change in the near or distant future. The days of the desktop computer for anyone but enthusiasts are waning. Most people are not buying personal computers anymore, and that makes perfect sense because most people just need their computers to be appliances; phones and tablets are perfect for that. I am not new to Linux. I’ve been using it for years on my servers, but I am no wizard at it by any means. This website is served on a Linux server, and all of my websites for the past 20 years have been served on Linux. I have also run various distributions on my secondary computers over the years, but my actual time with them has been limited. My primary computer has been consistently macOS-based for the past 14 years. That has now changed, and I can almost feel the unshaven hairs growing longer on my neck as a result.
The burning question probably is, “Why Linux?” I don’t have any ideology about computer software. I love free software; I’ve even contributed to some projects myself, but I will gladly buy good software if it does what I need. The honest truth is there’s nothing else to really move to. Windows 10 is bloated, slow, woefully insecure, costs a ridiculous amount of money, violates its users’ privacy, and umm… those fucking forced updates. I’ve already described macOS’ faults in detail above and in previous posts, so that doesn’t need repeating. It might seem like I am moving to Linux just because I feel like I have no other option; that’s not true. I am quite happily using it, and I am enjoying learning things about Linux that only come from using it as a primary operating system. It makes me feel like it’s 2006 all over again, when I abandoned Windows completely and used the first model Mac Pro as my primary. What made me love Mac OS X was its Unix underpinnings; while it presented a working environment that didn’t require access to the terminal at all, the terminal was there for power users, and the Unix folder structure — while hidden for everyday users — was there. I could automate repetitive tasks really easily without writing code in something like C, could explore the internals of the operating system, and after Homebrew came out installing software required just a single command. Linux provides me an operating system that is in some ways a lot like what Mac OS X was, not how macOS is today.
What Now?
Linux is greatly fragmented, and that is both its greatest strength and its greatest flaw. This fragmentation provides users with a lot of choice between different approaches, but that choice also makes it ridiculously hard to decide what to use as an end user — at least it has for me. The choices are nearly endless, as one can literally customize every aspect of their operating system. Given my affinity for macOS, one might think that out of everything on offer I would have ended up using elementaryOS. I didn’t.
While I don’t want to go into great detail on my assessments of different desktop environments because I might write about them later, I will say a bit about my choice of distribution. I have tried most of the popular distributions. Ubuntu is the most well known, if not the most popular. I currently run Ubuntu on my server and really like it for that, but I found I did not like it as a desktop distribution because most of the software available in its repositories is outdated. In the Linux world distributions are typically either fixed release or rolling release: updated in versioned releases, or updated gradually over time as individual packages themselves are updated, respectively. Ubuntu is a fixed release OS like macOS is. However, unlike macOS, all applications, whether they’re part of the OS itself or not, are traditionally installed via Ubuntu’s repositories, and between point releases of the operating system they aren’t updated as frequently as the application developers themselves release updates, if at all. Ubuntu has a solution for this shortcoming, Snaps, which are a way to bundle applications and distribute them independent of the operating system. Questions about Canonical’s absolute control over the distribution system itself aside, I find Snaps overengineered and quite like trying to mend one broken leg by cutting the other off. I really would like to avoid them and other things like them for the time being, so I quickly realized I would like to use a rolling release OS. I narrowed down my list to openSUSE Tumbleweed and Manjaro because they both are rolling release; software in their repositories isn’t bleeding edge; and packages are tested before release, which I quite like. I ended up going with Manjaro because almost all of the applications I like to use were available without adding additional repositories or resorting to Snaps, Flatpaks, or AppImages; the rest were on the AUR.
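To give a concrete idea of what this looks like in practice (the package names here are only examples, and the AUR one is a placeholder), installing from Manjaro’s repositories versus building from the AUR with Manjaro’s pamac tool goes something like this:

```shell
# Install a prebuilt package from Manjaro's official repositories
pamac install krita

# Build and install a package from the AUR; its sources are fetched
# and compiled locally ("some-aur-package" is a placeholder name)
pamac build some-aur-package
```

The difference matters: repository packages are tested binaries, while AUR packages are community build recipes compiled on one’s own machine.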
It also didn’t hurt that Jeff was fed up with Windows by this point and jumped head first into installing Manjaro while I was still taking my dear sweet time trying to decide. I eventually landed on KDE Plasma as my desktop environment, and I am mostly happy with it. I can do almost everything I need to do with a computer on Linux, and with free software at that. In fact most of the programs I used on macOS are available for Linux, so I am even using the same applications. For others, such as the macOS iTunes/Music app, there are alternatives for Linux which are actually quite a bit better. Not all of the alternatives I’ve found have been better, though; email applications on Linux are really lacking compared to macOS. I am using Evolution at the moment until I can find something less bloated and antiquated in user experience that still supports CardDAV.
I, unfortunately, haven’t been able to find good alternatives to the Adobe Creative Suite. GIMP is the most well known Photoshop alternative, but I absolutely hate using it. Its user interface and experience are quite frankly really weird and janky. Color management also only works on a single display, and feeding it color profiles to use causes it to crash. Without color management it’s pretty much useless to me. It also cannot work in CMYK or L*a*b*, which makes it not useful at all for printing or for manipulating color in photographs. Inkscape looks and works like CorelDRAW from 1999. I’m sure it’s quite capable, and I have tried to use it quite a lot, but I get frustrated trying to do even the most basic of tasks. I just don’t want to fight its awful user interface to get work done. Krita is quite possibly the best painting application in existence, free or not, and I have already used it in my work. For painting I feel like I’m quite covered, but for everything else I’d need Adobe’s apps. I am currently running Adobe’s applications in a Windows virtual machine guest. This presents a problem, of course, because Adobe’s applications make liberal use of GPU acceleration, and there’s a huge difference when there isn’t any. The solution to that is GPU passthrough using QEMU. I can give the virtual machine a secondary GPU and a display, and the virtual machine runs almost as if it’s on bare hardware. Barrier is then used to share my mouse and keyboard with the virtual machine. Works great. Most of the time I am just doing something lightweight that doesn’t really require GPU acceleration, so I can just run the same virtual machine in a window using the default SPICE video.
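For anyone curious, a heavily abbreviated sketch of that kind of passthrough setup looks something like the following; the PCI vendor/device IDs, bus addresses, and disk image name are all placeholders, not my actual configuration:

```shell
# Reserve the secondary GPU for the vfio-pci stub driver so the host
# graphics driver never claims it (the vendor:device IDs are placeholders)
echo "options vfio-pci ids=10de:1b81,10de:10f0" | sudo tee /etc/modprobe.d/vfio.conf

# Launch the Windows guest with the GPU and its audio function passed
# through; the guest then drives its own physical display directly
qemu-system-x86_64 \
    -enable-kvm -machine q35 -cpu host -m 16G \
    -device vfio-pci,host=01:00.0 \
    -device vfio-pci,host=01:00.1 \
    -drive file=windows10.qcow2,format=qcow2,if=virtio
```

In practice one would usually let libvirt generate and manage this invocation, but the underlying mechanism is the same.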
In some people’s opinion my virtual machine setup wouldn’t be preferable to just running Windows as my primary. I disagree. I can strip Windows 10 down to just what is required to run the Adobe Creative Suite, and I don’t have to worry as much about getting the OS configured just how I want it or about its myriad software failures that pop up at the most inopportune times. I keep a backup of the VM, so when something fails I am not stuck having to reinstall everything or search the internet for esoteric Windows errors. It’s just there to run Photoshop, Illustrator, and InDesign when I need them, whichever way I wish to run them.
A big disadvantage Linux has compared to macOS is color management. Everything one sees on the screen on macOS is color managed and has been since 1993. Regardless of one’s display’s capabilities, the colors on the screen are clamped to sRGB by default. This means that on displays which support color gamuts larger than sRGB, colors in the user interface don’t look oversaturated. Applications which require support for additional color profiles, such as Photoshop and web browsers, can access them and apply them to their open documents; however, the UI remains sRGB. Linux (and Windows for that matter) allows for custom color profiles, but it only applies LUT values and doesn’t clamp the UI to sRGB, making any color used in user interfaces oversaturated on my display. There is no way to fix this, and explaining this problem to people who have never used high gamut displays is like trying to explain blue to a blind man. I can live with this shortcoming, mostly by using a neutrally colored theme, which I would be doing anyway to avoid having the UI affect my perception of colors when painting or designing. Most applications which really need it, like Krita and web browsers, correctly transform the colors and not just the LUT values.
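For what it’s worth, applying a custom profile on Linux with ArgyllCMS looks something like this (the profile file name is a placeholder). Note that this only loads the profile’s calibration curves into the video card’s LUT and registers the profile for applications that ask for it; it does not clamp the UI to sRGB, which is exactly the limitation described above:

```shell
# Load the calibration (VCGT) curves from an ICC profile into the
# display's video LUT; "-d 1" selects the first display
dispwin -d 1 MyDisplay.icc

# Install the profile so color-managed applications (Krita, browsers)
# can find it and transform their document colors accordingly
dispwin -d 1 -I MyDisplay.icc
```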
Wacom tablets are supported out of the box on Linux, which is quite nice, with a driver that is quite well documented and doesn’t exhibit the application conflicts that the official driver has on Windows and has been having in recent years on macOS. However, depending on the desktop environment one is left with varying degrees of interfaces for configuring the tablet, stretching from no GUI at all to fully featured. Xfce has no tablet configuration tool. Cinnamon has a fully featured one. GNOME’s and KDE Plasma’s are problematic, although in different ways. After struggling to get what I wanted with Plasma’s tool I ended up writing my own. Aside from initial configuration I have no need for a GUI tool, so it works well and is easy (for me) to modify if I want to change its configuration in the future. While initially not having a GUI configuration tool for my tablet was a problem, the fact that I was able to easily write a tool myself because the driver is accessible via the command line is a huge advantage Linux has over either macOS or Windows.
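Under X11 that command-line access is the xsetwacom utility, which is roughly the interface a tool like mine wraps; the device name below is a placeholder since names vary by tablet model:

```shell
# List the tablet's input devices to find their exact names
xsetwacom list devices

# Constrain the stylus to a single display and remap a pen button
# to undo ("Wacom Intuos Pro Pen stylus" is a placeholder name)
xsetwacom set "Wacom Intuos Pro Pen stylus" MapToOutput HEAD-0
xsetwacom set "Wacom Intuos Pro Pen stylus" Button 2 "key +ctrl z -ctrl"
```

Settings made this way don’t persist across sessions, which is why scripting them is the natural approach.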
One big advantage over macOS is that I don’t need to boot into Windows to play most PC games. There are quite a few technologies that allow Windows games to run on Linux as well as or better than they do on Windows. There is of course Wine, and Steam has a thing called Proton, which is based upon Wine, to run games on Linux. On top of that there are applications such as GameHub and Lutris which are designed to make it really easy to configure and manage games, especially ones which weren’t bought on Steam. I like to play the occasional PC game with my girlfriend, so it’s really nice not having to boot into Windows to do so. So far pretty much every game I’ve wanted to play runs just fine. I refuse to buy games with anticheat malware or spyware; they don’t run in Wine/Proton due to the nature of what they are, anyway.
Whew. This, like my hackintosh before it, is an experiment, but it looks like I’ll keep this experiment going. It’s a good start, anyway. I would like to write more about my findings on Linux in the future and maybe do a few projects of my own in the Linux world. I can say I haven’t been this happy using my computer in quite some time, and the only thing I have to sacrifice is that I need to run Adobe’s applications in a virtual machine. It’s been nice finally getting this post worked out. Hopefully the next one won’t be as difficult to author.
I had a couple of other posts planned, but they both seemed quite unimportant while people are dying in the COVID-19 pandemic. Art, however, is important. It’s definitely one of the best coping mechanisms I know of for getting myself through difficult times, anyway.
This image has a bit of a backstory. A few years ago my girlfriend was into taking various personality tests, and there was one that associated you with a particular animal based upon your personality traits. I don’t remember what it is called or I would link it here, but there was a methodology to it, and very detailed thought processes went into the animal selections. It even had a very active forum with people discussing the topic. Upon taking the test mine came out as a river otter, my girlfriend’s a red panda. Months later I bought her a stuffed red panda, and she reciprocated with a river otter for me. She selected “Toffee” as the name for her red panda, and I named my river otter “Nonsense” after the frequently-used pun “otter nonsense”.
I have been experimenting with casein paint a lot lately. My previous post on this blog, in fact, is of a polar bear painted in casein. I have a few others that I’ve done that I will get around to posting here, perhaps all in a single post? I am not too sure yet. I actually began this one in late December while my girlfriend spent some time with me over Christmas. We painted together a few times, and for the longest time this was artist-taped to the top of the cedar chest in my office, barely worked on, as I moved on to other projects and other paintings while I devised in the back of my mind what I wanted to do with this piece. It was initially painted in watercolor using a set of Winsor & Newton Cotman watercolors my girlfriend brought with her. My intention was to use both it and casein, but it didn’t really turn out that way in the end. I initially had a more abstract water background going on, reminiscent of swimming pool caustics, that I wasn’t happy with. When I finally got the nerve to work on it again I brushed masking fluid over the otter and painted over the top of the background with casein.
The extremely vibrant blue used in the water is not a color available in Richeson’s tubes. It’s a primary blue cyan made from some Sennelier pigment I bought off of Dick Blick. To make casein paint I simply mix some pigment into casein emulsion, and I have paint ready to use. I might experiment with more pigments in the future, as Richeson’s color offerings are a bit slim.
As for the subject matter itself: river otters sometimes float on their backs, but as far as I know they never hold their food while in this position. I felt like I wanted him holding something, so in his paws I put a bluegill for him to munch on later. Bluegill are prevalent where I live in Louisiana.