Oak Leaf Clusters

Correcting a Faulty Application Installation

Note: This is really a set of notes so I can think things through accurately. It might be instructive to someone else who has similar issues.

Background: Newly assigned to an IT program as a database resource, I have been allocated an AWS EC2 VM (m5.large, 100 GiB storage, CentOS 7). Now it is time to configure the VM and to install and configure the tools I will need to collaborate in the DevOps environment with my new teammates. Git, Jira, Confluence, and Jenkins all installed correctly and have been appropriately configured. VS Code 1.56.1 installed, but failed to actually run. The issue was a missing library: the newer builds need a libstdc++ that exports CXXABI_1.3.9, which ships with GCC 5 and later, and stock CentOS 7 does not have it. VS Code has a lot of customers, and the more recent updates are built to work with current Ubuntu (and, I assume, other up-to-date Linux flavors); CentOS may be considered “moribund”. The workaround for VS Code is to install version 1.51.1. The next step is to install pgAdmin!
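For the record, the missing-library diagnosis can be confirmed from the shell. This is a sketch: /usr/lib64/libstdc++.so.6 is the usual CentOS 7 location, and cxxabi_versions is a convenience helper of my own, not a standard tool.

```shell
# List the CXXABI versions the system libstdc++ exports.
cxxabi_versions() {
    strings "$1" | grep '^CXXABI' | sort -u
}

lib=/usr/lib64/libstdc++.so.6   # usual CentOS 7 path; adjust for other distros
if [ -r "$lib" ]; then
    cxxabi_versions "$lib"
fi
```

On stock CentOS 7 (GCC 4.8) the list stops at CXXABI_1.3.7, so any binary that wants CXXABI_1.3.9 refuses to start.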

Following local instructions, I install PostgreSQL 11. No apparent issues.

Following local instructions, I install pgadmin4-web. yum reports, “Package pgadmin4-web-5.2.1.el7.noarch already installed and latest version. Nothing to do”

Following local instructions, sudo su. This reports an error: “ABRT has detected 1 problem. For more info run: abrt-cli list --since 1621010402”. Running the abrt-cli command, I learn that “xrdp-keygen killed by SIGABRT”. I do not think this is a factor and continue.

The next command in the instructions is sudo python3 /usr/pgadmin4/web/pgAdmin4.py &. That returns ModuleNotFoundError: No module named ‘cheroot’, raised at line 16 of pgAdmin4.py by the statement from cheroot.wsgi import Server as CherootServer.

Next course of action: if Python cannot find cheroot.wsgi, I’ll try to install that myself. The first attempt, pip3 install cheroot-wsgi, failed with error messages indicating that locations in the install directory were not writable, which induced me to try sudo pip3 install cheroot-wsgi, and that worked.

Overall, the key to this part of the story is sudo pip3 install cheroot-wsgi.
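In hindsight the whole diagnosis fits in a few lines of shell. A sketch, where have_pymod is a helper of my own (not part of pip or pgAdmin), and the fix itself is only printed as a suggestion rather than executed:

```shell
# Does the python3 that runs pgAdmin4 see the cheroot module?
have_pymod() {
    python3 -c "import $1" 2>/dev/null
}

if have_pymod cheroot; then
    echo "cheroot is importable; pgAdmin4 should start"
else
    echo "cheroot is missing; install it for all users with:"
    echo "    sudo pip3 install cheroot-wsgi"
fi
```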

An unfortunate series of events …

So I have recently started in a new position. The first few weeks after joining a team are hectic. There is a lot to learn about the team, the project, the processes and customs of the new environment, et cetera.

One of the on-boarding tasks is to customize the AWS-provided, Centos 7 virtual server to conform with the recommended development configuration. That customization includes the latest stable edition of google-chrome AND Visual Studio Code (VSC). Here is where it gets complicated.

The latest version of VSC doesn’t work on Centos 7. It broke a few versions back, when VSC was upgraded to work with the latest versions of Ubuntu Linux. I assume there was no effort to keep a Centos-compatible build because Centos has been perceived to be moribund.

The workaround is to use VSC version 1.51.1, the last version to work on Centos 7. So far, so good. But subsequent steps require installing google-chrome, and when yum begins that transaction, it uncovers a dependency on VSC 1.56.1. Completing the install of google-chrome triggers the undesired upgrade from VSC 1.51.1 to VSC 1.56.1, and that does not run on Centos 7.

An alternative course of action might be to install the gcc 5 libraries. But as I said, I’m the new guy around here, and I am unwilling (and unable) to make configuration changes to this virtual server without understanding the consequences. Should anyone have any idea how to resolve this, I would be grateful for your help.
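One idea I have not been able to try (I can’t change this VM’s configuration, so the sketch below only prints the plan): yum’s versionlock plugin pins a package so later transactions cannot upgrade it. The package names here (code, google-chrome-stable) are assumptions on my part:

```shell
# Print the proposed commands; nothing here touches the system.
plan() {
    cat <<'EOF'
sudo yum -y install yum-plugin-versionlock
sudo yum versionlock add 'code-1.51.1*'
sudo yum -y install google-chrome-stable
sudo yum versionlock list
EOF
}
plan
```

If the pin holds, the google-chrome transaction should complete without dragging VSC forward to 1.56.1.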

On voting

I am constantly bemused by the ever-increasing number of people who feel that it is their God-given right to tell me what to do and how to do it. Nay, they feel it is their absolute duty to do so. And they insist they get paid for the privilege. There is little I can reasonably do about it.

At least, with a vote, I can try to constrain them to the people who share the same values I do.

But it happens, all too frequently, that the people I vote for fail to win.

Clearly, I must work harder to persuade more of my fellows to my point of view.

Just Trying to Keep Up

There are times when you just don’t know what you don’t know. Every day the universe expands its horizon, usually because I step over the edge. Typically, I stumble across a new feature that I goggle at, and wish that I had the time to dive down that particular rabbit hole.

So, today, I start mapping the rabbit holes, with the fond hope that I will remember to come back to them when some segment of empty time becomes available. Here’s the list:

I will update this list as I run into more topics to explore.

PowerShell Remoting

So, you’ve got a new laptop? Congratulations!

Do you need to look at or get files from the old laptop? No problem! We’ll just fire up PowerShell and do a little remote computing to salvage what’s necessary from your previous work.

$cred = Get-Credential
$so = New-PSSessionOption -NoMachineProfile 
New-PSSession -SessionOption $so -Credential $cred -ComputerName "Server1"

What? You say you get an error message?

Set-WSManQuickConfig : <f:WSManFault xmlns:f="http://schemas.microsoft.com/wbem/wsman/1/wsmanfault" Code="2150859113"
Machine="localhost"><f:Message><f:ProviderFault provider="Config provider"
path="%systemroot%\system32\WsmSvc.dll"><f:WSManFault xmlns:f="http://schemas.microsoft.com/wbem/wsman/1/wsmanfault"
Code="2150859113" Machine="win81"><f:Message>WinRM firewall exception will not work since one of the network
connection types on this machine is set to Public. Change the network connection type to either Domain or Private and
try again. </f:Message></f:WSManFault></f:ProviderFault></f:Message></f:WSManFault>
At line:1 char:1
+ Set-WSManQuickConfig -Force
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (:) [Set-WSManQuickConfig], InvalidOperationException
    + FullyQualifiedErrorId : WsManError,Microsoft.WSMan.Management.SetWSManQuickConfigCommand

You can fix that.

Set-Item WSMan:\localhost\Client\TrustedHosts -Value *

At least that’s what worked for me. Your mileage may vary.


The transition from “old guy who’s an Oracle DBA” to “new old guy who’s a SQL Server AND Oracle DBA” continues, slowly … oh so painfully slowly. One of the complications is adopting Microsoft tools, with all the complexity that comes with them … and putting aside the familiar Linux/Unix/Solaris tools like bash.

For me, the premier Microsoft tool of choice is PowerShell. That, in itself, is a long slog. There is a lot to learn, and the PowerShell community has techniques to accomplish tasks that I am learning, one by one.

Jupyter notebooks

The real, underlying issue is the difficulty I experience trying to install enough software to begin to learn how to build a .NET Jupyter notebook. I still haven’t completed that task.

It’s a relatively straight path from “fat, dumb, and happy” to stressed-out old guy with a predisposition to use strong, abusive language. Somebody, somewhere recommended “BridgeConf” and I gaily went down that rabbit hole. The Microsoft .NET team had whipped up Jupyter notebooks so data scientists can work, research, code, and preserve the results in one handy little package.

Rob Sewell gave a heck of a presentation on .NET Jupyter notebooks and I was hooked. Unfortunately, things just didn’t go right … The initial set of instructions (https://devblogs.microsoft.com/dotnet/net-interactive-is-here-net-notebooks-preview-2/) persisted in failing for me. The blog post author, Maria Naggaga, hasn’t responded to anybody’s comments for three months. And I’m left with an incomplete install; more installed software than an honest DBA needs; a foul mouth; and uncharitable thoughts about Microsoft systems, software, and practitioners.

Here’s the comment I left:

The trials and tribulations of a New Guy.

Following the blog post, I
1. Download and install Anaconda
2. Download and install dotnet Core SDK (v. 3.1.201)
3. (At the Anaconda prompt), run jupyter kernelspec list. And receive
python3 ~\jupyter\kernels\python3
4. (In the command shell), I invoke > dotnet tool install --global Microsoft.dotnet-interactive
5. I return to the Anaconda prompt to run C:\Program Files\dotnet> dotnet interactive jupyter install
6. And I’m rewarded for my efforts with:
Could not execute because the specified command or file was not found.
Possible reasons for this include:
* You misspelled a built-in dotnet command.
* You intended to execute a .NET Core program, but dotnet-interactive does not exist.
* You intended to run a global tool, but a dotnet-prefixed executable with this name could not be found on the PATH.

Where did I go wrong?

I’m still searching for a solution …
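One suspect to check when I pick this back up: dotnet tool install --global puts its executables in a per-user tools directory (%USERPROFILE%\.dotnet\tools on Windows, $HOME/.dotnet/tools elsewhere), and dotnet interactive can only be found if that directory is on PATH. A POSIX-shell sketch of the check; on Windows the same idea means editing the user PATH variable:

```shell
# Ensure the per-user dotnet tools directory is on PATH, then look for
# the dotnet-interactive shim that a global tool install should create.
tools_dir="$HOME/.dotnet/tools"
case ":$PATH:" in
    *":$tools_dir:"*) ;;                        # already on PATH
    *) PATH="$PATH:$tools_dir"; export PATH ;;  # append for this session
esac

if command -v dotnet-interactive >/dev/null 2>&1; then
    echo "dotnet-interactive found; the jupyter install step should work"
else
    echo "dotnet-interactive not found; the global tool install may have failed"
fi
```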

SQL Server: Encryption

These are study notes for my transition from Oracle DBA to SQL Server DBA.

  • SQL Server encryption is provided by a hierarchy of “keys”.
  • The Windows Data Protection API protects the Service Master Key (SMK). (Footnote: What happens when SQL Server is installed on a non-Windows machine?)
  • The SMK protects the Database Master Key (DMK).
  • The DMK protects certificates and asymmetric private keys, which in turn protect the symmetric keys.
  • These symmetric keys are what finally encrypt the SQL Server data.

Cell level encryption:

  • is also known as column level encryption or granular encryption.
  • Only encrypts data at rest.
  • Encrypts entire columns (not individual cells).
  • Builtin functions to implement column level encryption
    • EncryptByKey/DecryptByKey
    • EncryptByAsymKey/DecryptByAsymKey
    • EncryptByCert/DecryptByCert
  • To implement cell level encryption, applications will need to change (or be customized for cell level encryption)
    • Source column needs a complementary VARBINARY target column
    • Required permission – CONTROL on the database
    • Required permission – CREATE CERTIFICATE
    • Required permission – ALTER on the table
    • Required permission – at minimum, VIEW DEFINITION on the key or certificate
  • Because the ciphertext changes with every encryption, the data cannot be usefully indexed, and primary and foreign keys on encrypted columns do not work
  • Range and equality searches must be executed as full table scans


  • <principal> is the database user

Technical minutia

This post discusses a technical problem that I very recently resolved. The mission is to load Kali Linux in a VirtualBox VM hosted by a Windows 8.1 laptop. After installing Kali in its virtual machine (VM), the next step is to update and upgrade the operating system. All of this is accomplished using apt, the high-level command-line interface to the package management system. Now, what makes this all so interesting is that before I started, I couldn’t even spell apt, much less know what it means or how it does its work. I’m doing all this work by following instructions in a PDF file.

So, the command to update and upgrade the Kali Linux operating system is apt-get update && apt-get upgrade -y. apt responded with:
Get:1 http://archive-5.kali.org/kali kali-rolling InRelease [30.5 kB]
Err:1 http://archive-5.kali.org/kali kali-rolling InRelease
The following signatures were invalid: EXPKEYSIG ED444FF07D8D0BF6 Kali Linux Repository <devel@kali.org>
Reading package lists... Done
W: GPG error: http://archive-5.kali.org/kali kali-rolling InRelease: The following signatures were invalid: EXPKEYSIG ED444FF07D8D0BF6 Kali Linux Repository <devel@kali.org>
E: The repository 'http://archive-5.kali.org/kali kali-rolling InRelease' is not signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.

Not what I was looking for. The search for the answer began with the author of that pdf recipe I was following. Give the author credit, he did finally send the correct solution to the problem. But I was impatient and needed to be smarter about the commands I was running. The next step was RTFM … read the feckless manuals.

Reading the manpage for apt-secure(8) was intimidating. The job of apt-secure is to prevent unauthorized altered archives from being circulated in the wide, wide world and masquerading as the genuine article. I will return to that manpage in the near future and study it very closely until I make it all clear to myself. But, at the end of the day, the solution to this problem was not found here.

Stack Exchange actually answered the mail for me. On the Unix & Linux site, marshki got me into the right neighborhood with a recommendation to first update the signing key:

sudo apt-key adv --keyserver http://http.kali.org --recv-keys ED444FF07D8D0BF6

That made sense to me. The string ED444FF07D8D0BF6 figured prominently in the error messages I received; intuitively, this must be the key itself, or the key identifier. As it turned out, though, this didn’t do the trick.
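That hunch is easy to make mechanical. A throwaway helper of my own (not part of apt) that pulls the key ID out of an EXPKEYSIG error line:

```shell
# key_id: read apt/GPG error text on stdin, print the ID following EXPKEYSIG.
key_id() {
    grep -o 'EXPKEYSIG [0-9A-F]\{16\}' | cut -d' ' -f2
}

echo 'W: GPG error: ... EXPKEYSIG ED444FF07D8D0BF6 Kali Linux Repository ...' | key_id
# prints ED444FF07D8D0BF6
```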

The follow-on post, by Steve (edited recently by PeterH), actually solved the riddle.

apt-key adv --keyserver hkp://keys.gnupg.net --recv-keys 7D8D0BF6 

Clearly, this is going to work better when I go to the correct key server. And I find it very interesting that the argument to --recv-keys, 7D8D0BF6, is just the last eight of the sixteen hexadecimal characters in the full key ID.
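The correlation is no accident: 7D8D0BF6 is the GPG “short” key ID, which is simply the last eight hex digits of the sixteen-digit “long” ID. A quick check:

```shell
# The short key ID is the last eight characters of the long one.
long=ED444FF07D8D0BF6
short=$(printf '%s' "$long" | tail -c 8)
echo "$short"   # prints 7D8D0BF6
```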

Once the correct keys were obtained from the correct key server, I was able to update and upgrade the Kali installation.

Here are some final observations:

Nothing, but nothing, ever gets done without a lot of very hard work.

It is always a risky practice to attempt to simply “follow a recipe”. Read the procedure, understand what the commands are doing, and pay attention to detail.

Kali is a specialized tool, an offspring of Debian Linux, designed for ethical hackers to perform penetration testing.

Time Zones

I thought it was going to be easy. The issue is:

Given a global enterprise that records world-wide transactions with a timestamp in UTC, display for each transaction the local time of the transaction.

Hmm … the Oracle NEW_TIME function ought to fill the bill. Now, how do I determine which time zone the transaction belongs to?

It appears I will need to use tz_world, an efele.net tz map (or its successors) to assign a time zone to each transaction location, and calculate the new time from there.

This is a work in progress … stay tuned as I dream up the solution.