DIY Second Keyboard (Script Triggering)

Here’s my idea:

  1. Plug a USB keyboard, possibly with custom labels on the keys, into a
  2. Raspberry Pi configured with XBindkeys to run
  3. a Python script that sends a trigger over a TCP/IP socket (via your LAN) to
  4. an AHK socket server that will then run the desired local AHK script (rough sketch of the Python side just after this list).
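
Concretely, here’s what I have in mind for the Python side. The host address, port, and newline-terminated message format are placeholder assumptions I’d still have to settle on, and the AHK socket server on the Windows box would need to expect whatever format I pick, so treat this as a sketch rather than tested code.

#!/usr/bin/env python3
# send_trigger.py -- rough sketch of step 3: xbindkeys runs this with a key name,
# which gets sent over TCP to the AHK socket server on the Windows box.
# The host, port, and message format are placeholder assumptions, not settled choices.
import socket
import sys

AHK_HOST = "192.168.1.50"  # hypothetical LAN address of the Windows machine
AHK_PORT = 5555            # hypothetical port the AHK socket server listens on

def send_trigger(key_name):
    # Open a short-lived TCP connection and send the key's name, newline-terminated.
    with socket.create_connection((AHK_HOST, AHK_PORT), timeout=2) as conn:
        conn.sendall((key_name + "\n").encode("utf-8"))

if __name__ == "__main__":
    send_trigger(sys.argv[1] if len(sys.argv) > 1 else "unknown")

Each entry in .xbindkeysrc would then call the script with a different key name, and the AHK side would map that string to whichever script it should run.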

Pros:

  • All parts can be re-purposed if the dedicated script-trigger keyboard is no longer needed.
  • Can be very inexpensive (I already own the hardware to do this).
  • Should be low-maintenance once working, as the Pi should auto-boot to the correct state and the info sent over TCP/IP should just indicate that a certain key was pressed.
  • I feel like I can handle the technology involved.

Cons:

  • I haven’t even started trying to see if this works.

Every so often when I’m using AutoHotKey scripts I wish I had a second keyboard dedicated to my various scripts. While something like the AHK Command Picker is probably all I need, the Enterpad is really more what I’ve been dreaming of:

[Image: Enterpad (desktop application layout, English)]

Unless I’m given a free one like Daniel Schroeder, it’s pretty hard for me to justify close to $300. But a DIY setup? That might just happen.


Migrating Windows 10 to a smaller SSD

Last year I went through quite the ordeal migrating Windows 10 to an SSD smaller than my old hard drive. This may not be the best way, but the following are the steps that contributed to the successful migration (as opposed to all the missteps that made this process far longer and more painful than it needed to be). I’m addressing my future self when I attempt something like this again.

Get Ready

Back up data. First of all, make sure you’ve backed up what matters to you. In 2015 I was paying for CrashPlan’s cloud backup, so that was my solution there. It certainly wouldn’t hurt to use Clonezilla to create an image of your drive before you start doing surgery, but I didn’t, because I like to live dangerously.

Create live USB drives (all three: GParted, Clonezilla, and a Windows recovery drive).

Lose Weight

Bottom line: you cannot transfer more data than your target SSD can hold.

Remove low-hanging fruit. I was already in the habit of using WinDirStat to find the biggest space hogs. I love the information-rich visualization. Remember to run with administrative rights so that all the system files will be analyzed, instead of just showing up as “Unknown” and scaring you.

The easiest targets for me were games that sucked up a lot of space storing data that I can just download again from Steam or Blizzard. Because I’ve been burned before, I used GameSave Manager to backup my game progress before uninstalling.

Let cloud data stay in the cloud. Since Dropbox and Google Drive already have all their data in the cloud, I used selective syncing to stop syncing pretty much all folders.

Cut deep with care. Since I was desperate for space, I went a little crazy. I turned off System Protection for my drive (in the Properties for the drive). I turned off Indexing (and did it in such a way that I’m still struggling to get it working again, so . . . not recommended). I tried to compress the entire drive (also in the Properties for the drive). And in the end I completely disabled paging and swap (run: systempropertiesperformance), although that was more to make it through the next step.

Suck It In

Defragment/Optimize. So, even once your data can theoretically fit, you’ve got to get all your bits packed at the beginning of the drive so that you can safely lop off the extra disk space for the move. I tried a product that claimed it could do the move without this step, but to no avail.

If you remove/disable everything I mentioned above, you probably just need to run the defragmenter (“Optimizer”) via the Properties of your drive, Tools tab, Optimize button.

Verify and investigate with Disk Management and Event Viewer. You can check whether you’ve trimmed enough by running Disk Management (run: diskmgmt.msc) and selecting the Shrink option for the partition. I couldn’t actually shrink this way, but the reported new size told me whether I had done enough to fit.

When I was nowhere near small enough, I was able to open the Event Viewer (run: eventvwr.msc), go to the Application log, and filter for event ID “259”. This event would let me know what unmovable file was getting in my way.
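
If I ever want to pull those same events without clicking through Event Viewer, a quick sketch like this should do it; it just shells out to the built-in wevtutil tool to list the most recent ID 259 entries from the Application log.

# shrink_blockers.py -- sketch: list recent event ID 259 entries from the Application
# log (the same defrag events visible in Event Viewer) to see which unmovable file
# is blocking the shrink.
import subprocess

result = subprocess.run(
    [
        "wevtutil", "qe", "Application",
        "/q:*[System[(EventID=259)]]",  # filter to event ID 259
        "/c:5",                         # only the five most recent entries
        "/rd:true",                     # newest first
        "/f:text",                      # human-readable output
    ],
    capture_output=True, text=True, check=True,
)
print(result.stdout)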

Portion Control

Resize the source partition and create target partitions. I ran GParted via a live USB (the first of three needed). I shrank my source partition in no time. I also set up the target partitions on the SSD.

Here’s where I learned something (over many hours): I needed to copy over the 100MB “System” partition too. No joy without this, for whatever reason. Nobody told me.

Transplant and Recover

Create backup images where practical. Running Clonezilla from the second live USB was a little scary, but not a problem. I created images of the System partition and also HP’s restore partition (for Windows 7), just in case.

Copy the partitions via Clonezilla.

Run Startup Repair. I disconnected my source hard drive before booting with the Windows recovery drive to perform startup repair. When I finally did this with the “System” partition in place, all was well.

Double check, then wipe old partitions. Once successfully booting, I shut down, reconnected the hard drive, and used GParted again to wipe the old partitions and create a new one.

Undo your temporary measures. With all as it should be, I closed my computer case, booted, and began undoing my drastic actions (as space became available) of disabling paging and swap, compressing the drive, disabling system protection, and disabling indexing.

New Lifestyle

Move libraries off the SSD. Find your various libraries (such as Documents, Downloads, Pictures, etc.), right click on one, choose Properties, then on the Location tab click on the Move… button. You can then choose to move the folder to reside on the old hard drive.

Move cloud-synced folders off the SSD. While Dropbox lets you move the folder through the Preferences (Account tab), Google Drive requires you to disconnect and reconnect your account in order to choose a new location. Each cloud-syncing service will have its own way.

Install fat programs to the hard drive. So far, the programs I’ve reinstalled have been happy enough to be told to install somewhere other than the default location.


Definition of Done vs. Checklists

If you look at my first offering of my team’s “Definition of Done,” you’ll see that it focused almost entirely on the tool (Team Foundation Server). While the items are important to remember, they aren’t agreements made by the team, but rather “the right way” to use the tool. As such, those items really belong on a checklist, not in the Definition of Done.

Here is the cleaned-up version.

Definition of Done

Requirement

  • Follow the “TFS Requirement Checklist”
  • Definition of Ready
    • The “Requirements Review Checklist” has been answered.
    • The team has read the user story and acceptance criteria and has agreed on a size via planning poker.
  • Done
    • The requirement has been met.
    • The following are handled gracefully:
      • No results
      • Null values
      • Maximum-length strings
    • Reasonable input validation is implemented (see “Development Expectations”)
    • Wiki documentation updated if applicable.

Bug

  • Follow the “TFS Bug Checklist”
  • The Bug has been addressed:
    • Path 1: Bug has been fixed in the data or environment and no longer occurs.
    • Path 2: Bug has been fixed in the code and will not occur after a new build (nightly).
    • Path 3: Bug cannot be reproduced, needs more detail, or reports as-designed functionality.

Task

  • Follow the “TFS Task Checklist”
  • Task is Accomplished
  • Unit tests updated and pass
  • Code review completed, if needed

Sprint

  • Sprint Retrospective
  • Team Sprint Review
  • Internal Sprint Review (cross-team functionality)
  • “Sprint Review and Approvals” form drafted.
  • Customer Sprint Review and “Sprint Review and Approvals” form signed.
  • “Release Notes” draft updated.

Release

  • Ready for Regression Testing
    • All new functionality has been coded and tested.
    • All code checked in.
    • All requirements and bugs included in the release should be in a closed state in TFS.
  • Done
    • Regression Testing complete.
    • All Change Requests in the release are Closed (in cooperation with the Change Management team).
    • All Requirements and Bugs in the release are Closed.
    • All documentation included in the release package submitted (“Change Management Team checklist”).
    • Informational meeting and notes provided to the Help Desk, Documentation Team, and Trainers.

Reminder to self: Clean up Windows 7 temp files

I’m pretty sure I’ve freaked out about this before and solved this before. And then forgotten.

Note to self #1: run WinDirStat as Administrator to figure out the identity of the “Unknown” bytes taking up all your free disk space. Anything that requires Administrator rights to read will be “Unknown.”

Note to self #2: clean out “C:\Windows\Temp” every so often. That was eating up tons of space.
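
If I ever want to stop forgetting, a small sketch like the following could do the cleanup for me. It needs to run as Administrator, the 30-day cutoff is an arbitrary number I picked, and anything locked or in use just gets skipped.

# clean_windows_temp.py -- sketch: delete files under C:\Windows\Temp older than 30 days.
# Run as Administrator; locked or in-use files are skipped rather than forced.
import time
from pathlib import Path

TEMP_DIR = Path(r"C:\Windows\Temp")
CUTOFF = time.time() - 30 * 24 * 60 * 60  # anything older than 30 days (arbitrary)

freed = 0
for item in TEMP_DIR.rglob("*"):
    try:
        if item.is_file() and item.stat().st_mtime < CUTOFF:
            size = item.stat().st_size
            item.unlink()
            freed += size
    except OSError:
        continue  # file in use or otherwise protected; leave it alone
print("Freed roughly %.1f MB" % (freed / (1024 * 1024)))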


Example of using AutoHotKey instead of a batch file

On Super User I addressed a question about using a batch file to create a 7z file (like a ZIP file) of each individual file within a directory. That’s painful when you’re restricted to a Windows batch file, but once the questioner mentioned that AutoHotKey was an option, I thought:

PROBLEM SOLVED

AHK’s StrLen and SubStr can extract the variable portion of the file path. The file loop will recurse through all the files in the source directory. And then it’s just a matter of using RunWait to pass the paths to 7-Zip. The “,, Hide” specified at the end of the RunWait tells it to hide the command windows spawned.

InputBox, password, Enter Password for Archives, The generated archives will be protected with the password you enter below. Your input will be masked., hide
; Using FileSelectFolder is just one way of choosing your folders.
FileSelectFolder, sourcepath,,, Source Folder
sourcepath := RegExReplace(sourcepath, "\\$") ; Removes the trailing backslash, if present.
FileSelectFolder, destinationpath,,, Destination Folder
destinationpath := RegExReplace(destinationpath, "\\$") ; Removes the trailing backslash, if present.
sourcelen := StrLen(sourcepath) + 1 ; Determine the start of the variable part of the path.
Loop, Files, %sourcepath%\*.*, R
{
    varfilepath := SubStr(A_LoopFileFullPath, sourcelen) ; Grab everything to the right of the source folder.
    RunWait, "c:\program files\7-zip\7z.exe" a "%destinationpath%%varfilepath%.7z" "%A_LoopFileFullPath%" -p"%password%" -t7z -mx0 -mhe -mmt,, Hide
    FileCount := A_Index
}
MsgBox Archives Created: %FileCount%`nSource: %sourcepath%`nDestination: %destinationpath%

I added in some pretty easy (in AHK) UI elements to enter a password, select the source and destination directories, and to be notified when the operation finished.

Note that you need AHK v1.1.21 or later for the file loop to operate as written.

[Screenshots: password prompt, folder-selection dialogs, and completion summary]

Efficiency Order vs. Value Order

I’ve been meaning to write a comparison of Waterfall and Agile software development, as well as the truths in the PMBOK. Until then, I just want to point out one key difference: efficiency order vs. value order.

This is my own terminology, but “efficiency order” and “value order” seem to be the two most reasonable responses to the question, “In what order will work be performed?” Work can be ordered according to the value generated, such that the highest priority work is accomplished first. Or work can be ordered according to the sequence with the least waste. Value order puts building a submarine hull before installing sonar, since submariners can live without sonar, but not without a hull. Efficiency order might install sonar before the hull is complete if doing so requires less labor and improves the schedule.


PMBOK (5th Edition) Inputs, Outputs, Tools, & Techniques

Going through the various processes in the PMBOK (5th Edition), I’m realizing that what I really need to pay attention to isn’t the list of processes, but what is involved in all of them. The repetition of the inputs, tools, and techniques is not how I like to think about these things. So I’m putting together the index below. This is mostly for my reference, but I’m putting it out into the Internet in case anybody else stumbles across this in their search for information on the PMBOK (5th Edition).

[Image: PMBOK Guide, Fifth Edition]


2, 24, 8, 11, 2 and 6, 6, 7, 4, 3, 4, 3, 6, 4, 4

I’m taking a course right now to prepare for the PMP exam. One of the things we’re working to memorize is the categorization of 47 processes into knowledge areas and process groups. Those categorizations form a grid. Memorizing the number of processes in each column and row really helped my table recreate the grid from memory and impress the instructor.
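
As a quick sanity check on those rows and columns: the first sequence in the title covers the five process groups and the second covers the ten knowledge areas, and both have to account for the same 47 processes. A throwaway snippet confirms the sums match.

# grid_check.py -- the process-group counts (Initiating through Closing) and the
# knowledge-area counts (Integration through Stakeholder) must both sum to 47.
process_groups = [2, 24, 8, 11, 2]
knowledge_areas = [6, 6, 7, 4, 3, 4, 3, 6, 4, 4]

assert sum(process_groups) == sum(knowledge_areas) == 47
print(sum(process_groups), "processes either way")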

In addition to those arbitrary sequences of numbers, I have a few additional observations about the distribution of processes according to PMI:

The Initiating process group includes only two processes: developing the project charter (authorizing the project) and identifying stakeholders. These make sense, since before the project manager even gets involved, somebody has already decided they want the project to occur and has some idea of who will be involved in some way. Once the project manager has these in hand, project planning can start.

The Closing process group includes only two processes: closing project work and closing procurements. The distinction here is between the value generated by the team itself and the value acquired from an outside source.

There are four knowledge areas that have nothing in the Executing process group, and they are the ones that are essentially the project constraints and risks to them: Scope, Time, Cost, and Risk. These each have a large number of Planning processes and one Control process. Scope has the extra validate process under Monitor & Control to ensure that the project resulted in what was promised.

The one knowledge area with no Monitor & Control process is the Human Resource one. This is presumably because monitoring and controlling is about ensuring that your personnel are accomplishing the project as expected within the specified constraints, and thus a dedicated process at this intersection would be redundant.

Each of the knowledge areas that have at least one Executing process has only a single Planning process: Integration, Quality, Human Resource, Communications, Procurement, and Stakeholder. Half of these are about the actual accomplishment of work (HR, Procurement, and Stakeholder). The other three are the core of what a project manager does: coordination (Communications), management (Integration), and oversight (QA).

More memorization tomorrow. Good night.


Another blank vs. null SQL story

Abbreviated root cause analysis:

1. A nightly update job began failing after a new release because the unchanged job was not built to handle the data found in the production environment.

2. The static data (location information, such as US states) in production was changed with the release, but not any of the dynamic data updated by the job (personnel information, such as names and addresses).

3. The dummy dynamic data used in the testing environments worked with the changes to the static data, because the dummy data was created with the assumption that all personnel have some sort of location entered (non-nullable field), even if it doesn’t match our list of recognized locations.

4. The static location data contains an abbreviation field which was left blank for new, special-purpose locations stored in the same table but irrelevant to the personnel data referenced here. The abbreviation field is also non-nullable.

[Screenshot: Instant SQL Formatter]

You now have all of the pieces.

5. The production data contained surprising, long-standing examples of personnel with blanks for their state abbreviations. For the first time, the static table contained records that also had blanks for their abbreviations (since they don’t really have one). The update query joined the abbreviations, and every person with a blank matched all of the locations with a blank, instead of the one unique state. If either or both of these had been null instead of blank, there wouldn’t have been any crazy matches, as nulls don’t join.

Unique ID  Name              Abbreviation  Expanded
111387     Dane Weber        VA            Virginia
111388     Rumpel Stiltskin                Special Region Alpha
111388     Rumpel Stiltskin                Special Region Beta
111388     Rumpel Stiltskin                Special Region Gamma
111388     Rumpel Stiltskin                Special Region Delta
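
The behavior is easy to reproduce outside the real system. Here’s a minimal sketch using Python’s sqlite3 as a stand-in database (the actual system isn’t named above, and the table, column, and extra person names are made up for illustration), showing that a blank abbreviation joins to every other blank while a NULL joins to nothing:

# blank_vs_null_join.py -- sketch with sqlite3 as a stand-in: blank abbreviations
# join to every other blank row, while NULLs never satisfy an equality join.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE location (abbreviation TEXT NOT NULL, expanded TEXT);
    CREATE TABLE person   (unique_id INTEGER, name TEXT, state_abbr TEXT);

    INSERT INTO location VALUES ('VA', 'Virginia'),
                                ('',   'Special Region Alpha'),
                                ('',   'Special Region Beta');

    INSERT INTO person VALUES (111387, 'Dane Weber',       'VA'),
                              (111388, 'Rumpel Stiltskin', ''),    -- blank, not null
                              (111389, 'Jane Doe',         NULL);  -- null, not blank
""")

rows = db.execute("""
    SELECT p.unique_id, p.name, l.expanded
    FROM person p
    JOIN location l ON p.state_abbr = l.abbreviation
""").fetchall()

for row in rows:
    print(row)
# (111387, 'Dane Weber', 'Virginia')
# (111388, 'Rumpel Stiltskin', 'Special Region Alpha')  <- blank matched blank...
# (111388, 'Rumpel Stiltskin', 'Special Region Beta')   <- ...for every blank location
# Jane Doe never appears: NULL never equals anything, so there are no crazy matches.

Remedy #2 below would also have tripped the problem early: once the location table holds two blank abbreviations, a unique index on that column can’t even be created.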

The remedies:

  1. Use null rather than blank when you mean that there is no such information.
    • This applies to the dynamic data.
    • This applies to the static data.
    • This is a cop-out remedy, because it translates into “don’t make mistakes.”
  2. Require that the static table’s Abbreviation column only contain unique values.
    • This would have been a good change to make when first writing the nightly update job.
    • This guarantees that there can only be a single match for a blank abbreviation.
    • While the match is probably not what you want for the personnel record, the developer adding the special-purpose locations would have bumped into the restriction, which would have led to the right questions being asked much earlier.
  3. Test against a copy of production data.
    • This is a great solution for finding errors before they hit production.
    • This is a no-go when you do not have a testing environment as secure as your production one.
  4. Analyze production data for the range of values present in each column. Generate test data that includes the full range of values.
    • This is a good practice in general, assuming you can’t test against real data.
    • This will help catch other, unforeseen errors.
    • There is a real cost to spending time on this kind of analysis and data generation, and that requires the will of management.