This blog allows me to disseminate knowledge I think should be readily available. If I find myself ranting, I try to make sure it's informative, and not completely subjective. I try to offer solutions--not problems--but in the case I do present a problem, please feel free to post a comment.
2.5" SSDs are dense enough and cheap enough now that it makes sense to replace 2.5" HDDs. I have been using an M.2 NVMe SSD in this laptop for GNU+Linux, but I still had Windows on the original hard drive. However, whenever I had to boot into Windows, I was reminded just how slow Windows and hard drives are.
First, I used the dd command to clone the hard drive to the SSD, connected through a StarTech USB 3.0 to SATA converter. I unmounted the hard drive, then ran dd, which took the better part of a day. Then it was time for the physical replacement.
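The clone step can be sketched as follows. On the real hardware, the input and output were whole devices (something like if=/dev/sdX of=/dev/sdY, both unmounted); here two ordinary files stand in so the commands are safe to try, and all names are placeholders.

```shell
# Create a small stand-in "source disk" for demonstration (4 MiB of random data).
dd if=/dev/urandom of=source.img bs=1M count=4 2>/dev/null

# The clone itself: on real hardware this would be whole devices,
# e.g. if=/dev/sdX of=/dev/sdY, run with both drives unmounted.
dd if=source.img of=dest.img bs=4M conv=fsync 2>/dev/null

# Verify the copy is byte-for-byte identical.
cmp -s source.img dest.img && echo "clone verified"
```

A large block size (bs=4M) keeps USB-to-SATA transfers from being dominated by per-call overhead, and conv=fsync forces the data out before dd reports success.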
Before
The old hard drive weighed in at 89 grams. The new SSD weighed 38 grams, less than half the weight.
After
The real savings should be in power consumption and response times. Windows still boots, and now takes only a few seconds to do so. I did not measure the boot times, but it seemed much faster. Time will tell.
I wanted to make sure that it still makes sense to use Blu-ray Recordable (BD-R) discs for archiving old files. So, I got several samples of storage prices from Amazon and computed the cost per gigabyte for various available sizes of several types of storage.
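The cost-per-gigabyte computation is trivial; as a sketch with made-up numbers (a 50-disc spindle of 25 GB discs at a hypothetical $35.00, not the actual Amazon price):

```shell
# price, disc count, and per-disc capacity are hypothetical placeholders
awk 'BEGIN { price = 35.00; count = 50; gb = 25; printf "$%.3f/GB\n", price / (count * gb) }'
```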
Name | ASIN | Density (GB) | Count | Price | $/GB
Blu-ray Recordable
Verbatim BD-R 25GB 6X Blu-ray Recordable Media Disc - 50 Pack Spindle - 98397 | | 25 | 50 | |
I wanted to practice setting up btrfs on Ubuntu server. My requirement is nightly backups retained for 30 days. I started with a 10GB virtual disk on a VirtualBox VM. I partitioned it with the following table:
I had to add a mount of the btrfs top-level volume at /mnt/btr_pool to the fstab:

ryan@ubuntu:~$ cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
#
# / was on /dev/sda2 during installation
UUID=6ae1fc17-428e-4446-941c-f478c71b9cfd / ext4 errors=remount-ro 0 1
# /boot was on /dev/sda1 during installation
UUID=683a6648-1598-44d3-930e-62aba3b8a4a5 /boot ext4 defaults 0 2
# /home was on /dev/sda3 during installation
UUID=50e15cf1-f4d8-4ac3-9116-dfea86eb7c33 /home btrfs defaults,subvol=@home 0 2
# swap was on /dev/sda4 during installation
UUID=ace0597d-3b9f-45ca-bab2-9a49ff6dbe51 none swap sw 0 0
# mount btrfs for backup
UUID=50e15cf1-f4d8-4ac3-9116-dfea86eb7c33 /mnt/btr_pool btrfs defaults,subvolid=0 0 0
/etc/cron.daily/btrbk:
#!/bin/sh
exec /usr/sbin/btrbk -q -c /etc/btrbk/btrbk.conf run
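The btrbk.conf itself is not shown above. A minimal sketch matching the nightly-backups-for-30-days requirement might look like this; the retention values and log path are illustrative, not my exact configuration, though the volume, subvolume, and snapshot directory match the layout shown below:

```
# /etc/btrbk/btrbk.conf (sketch)
transaction_log        /var/log/btrbk.log
snapshot_preserve_min  latest
snapshot_preserve      30d

volume /mnt/btr_pool
  snapshot_dir btrbk_snapshots
  subvolume @home
```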
After creating some directories, the backups appear as:

ryan@ubuntu:~$ ls /mnt/btr_pool/btrbk_snapshots/
@home.20171022T2149  @home.20171022T2216    @home.20171022T2223
@home.20171022T2151  @home.20171022T2216_1  @home.20171022T2229
@home.20171022T2152  @home.20171022T2216_2  @home.20171022T2231
With reference to this earlier post, I wanted to compare a Nikon 18-200mm lens and a Sigma 150-500mm lens on a D90.
[Comparison photos: the 18-200mm and the 150-500mm, each at its widest and its most telephoto setting]
So the widest the 150-500mm lens will go (150mm) is only a little wider than the 18-200mm lens at its narrowest (200mm). The 150-500mm at its narrowest (500mm) is much narrower than anything the 18-200mm can reach.
For a long time, I kept this process to myself, but these days it seems like everyone and their brother (no offense to mine) can make panoramas, so here it goes.
For this demonstration, I used a Nikon D90 and the Nikon 10.5mm fisheye lens. The first step is to set up and fix the camera settings. I put the camera into manual mode and then use autofocus to focus to infinity on something at the horizon. I then set the camera to manual focus so that the focus won't change. When shooting outside, I will usually use the "sunny 16 rule" and start setting up the exposure by setting the aperture to f/16. Then, with the camera pointed at the horizon or something interesting in the perspective, I use the exposure meter in the viewfinder to balance the exposure. I then enable +/- 3EV exposure bracketing, so the camera will shoot the nominal exposure, an underexposure, and then an overexposure. This step is not strictly necessary, but it makes it possible to correct the exposure of specific regions later. Although I sometimes forget, you must set the white balance and ISO rating while in manual mode. I usually set the white balance to "sunny" or "cloudy" and the ISO to 200. Recently, I've been using the high-quality JPG format for output. It is more economical than RAW format, and in JPG mode the camera applies certain lens corrections to the output, such as chromatic aberration correction.
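As a quick recap of the exposure starting point: the sunny 16 rule says that in full sun, at f/16, the shutter speed should be roughly the reciprocal of the ISO. A trivial sketch:

```shell
# Sunny 16 rule: full sun, aperture f/16, shutter ~ 1/ISO.
iso=200
echo "Start at f/16, 1/${iso}s, ISO ${iso}"
```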
Then I set up the Nodal Ninja with the 45-degree detent ring (8 positions), and I first shoot a row at a pitch of 30 degrees upward. Once I've gone all the way around, I adjust the pitch to 30 degrees downward. At each position, I press the shutter release three times because I am using exposure bracketing. If there is something interesting below the tripod, I will fold up the tripod and hold it out to take 3 additional pictures of the nadir using the remote shutter release. The video below demonstrates the sequence of positions and the 48 (16*3) pictures that result.
The next step is to load the images onto the computer and to align them with each other. I have some scripts that automate this process, as demonstrated in the video below.
The script takes the images, in the order in which they were taken, and applies the appropriate pitch and yaw in the resulting Hugin PTO file. It also determines the EV value, focal length, and FOV of each exposure and sets them in the PTO. The script then runs a control point generator on each pair of adjacent images in the graph below. Essentially, the two rows (actually, cycles) of +/- 30 degrees are connected horizontally, the corresponding positions in the two rows are connected vertically, and finally the additional under- and over-exposure images are connected to their nominal exposure image. This drastically reduces the time required to create control points and the number of false control points.
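The geometry the script writes into the PTO can be sketched like this (the loop and names are illustrative, not my actual script): 8 yaw stops of 45 degrees from the detent ring, in two pitch rows at +/- 30 degrees, giving the 16 bracketed positions (nadir shots excluded).

```shell
# Planned yaw/pitch for the 16 camera positions, in shooting order:
# 8 stops of the 45-degree detent ring, first at +30 pitch, then at -30.
i=0
while [ "$i" -lt 16 ]; do
  yaw=$(( (i % 8) * 45 ))
  if [ "$i" -lt 8 ]; then pitch=30; else pitch=-30; fi
  echo "position $i: yaw=${yaw} pitch=${pitch}"
  i=$(( i + 1 ))
done
```

Each bracketed triple (nominal, under, over) shares the position of its nominal frame, which is how the 16 positions become 48 images.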
At about 2:00 into the video, I launch the panorama previewer. This allows you to see a coarse rendition of what the result will look like. The exposure is set to 0 by default, so I click the 'auto EV' button to set it to the average exposure. The pictures are shown in the positions set by the script. The Nodal Ninja and tripod can move slightly, so there will be some error from the planned angles. To correct this, we use the control point optimizer, which also sets the parameters of the camera model so that the images render onto the equirectangular projection with low error. To set up the optimizer, we allow it to vary the yaw, pitch, and roll of all the frames except one level, nominal-exposure frame (usually the first); fixing that photo anchors the panorama. For my camera, I allow the tool to optimize the field of view (v) and the barrel distortion coefficients (b and c). I also allow the x shift (d) and y shift (e) to be optimized if necessary.
An iterative process follows whereby I optimize, remove outlying control points, add additional control points, and (sometimes) tweak the optimizer settings until the maximum distance between control points is less than 0.7 pixels. In the fast panorama preview window, you can "Show control points", which allows you to see where you have false control points or where objects have moved during shooting. This is typical of clouds as is seen in the video. The tripod should also be ignored because it will be removed in the final result. Remove any control points on the tripod or panoramic head. Alternatively, you could use a mask before you optimize, but I haven't yet integrated this into my tools. This iterative process is genuinely carried out in the video (no smoke and mirrors).
The next step is to choose a projection and output size. The Hugin tool does a great job of computing the optimal size for your output. I use the equirectangular projection for my output. I will sometimes choose a small size (2000x1000 pixels) to check for major errors. Not shown in the video are the output types you want to select. Under panorama outputs, check "Exposure fused from any arrangement", Format: "TIFF", Compression: "Packbits". Under remapped images, check "No exposure correction, low dynamic range". Under layers, check "Blended layers of similar exposure, without exposure correction". I use Nona as the remapper, enfuse for image fusion, and enblend for exposure blending. To save space, I check "Save cropped images" under the remapper options.
The next step in the process is to make corrections to the remapped images. This is demonstrated in the video below.
In this image, I had a subject that moved around, and I wanted to fix that first. I use the extra outputs that I requested in the Hugin stitcher tab to put my subject back in one piece. In this particular output process, the exposure was slightly different in the resulting panorama than in any of the individual exposure layers. Ideally, one would create the three exposure layers as complete panoramas and then blend the results together. Unfortunately, my subject moved within the exposure bracket, so I had to blend the exposure myself with the GIMP. There is a slight aura in the result which could be fixed with more care; I was hasty for the sake of keeping the video short.
The following video demonstrates how the remapped images are positioned. The tool can form a complete panorama for each exposure setting and then blend these together. The exposure blending process essentially starts with the nominal exposure, fills in highlights from the underexposure, and fills in shadows from the overexposure.
The next thing I do is remap the final blended image that we just fixed onto a cubic projection. This is done with the erect2cubic tool. The point of this is that it is easier to fix the zenith and nadir in the cubic projection. There is sometimes a dark spot at the zenith, and the tripod is visible at the nadir. The process to remove these is shown in the video. If you took additional pictures of what is below the tripod, they can be integrated into your output; that will be covered in a tutorial to come. Usually, you can just replicate the ground around the tripod to cover it up.
I then map these images back to the equirectangular projection. The final output is ready for uploading after a couple of checks. I look around it closely for fusion errors, and I check the histogram to make sure I am using most of the exposure range. Then, I choose a JPEG quality such that the output is less than 25MB. A low-quality JPEG is shown below:
I built a new system including an ASUS P9X79 motherboard. Wake on LAN (WOL) was just not working for me, and a setting that would enable it did not seem to exist in the BIOS (well, the EFI). There is essentially no explicit documentation on how to enable this feature in the owner's manual. Online, I found many people struggling with the same problem on older ASUS motherboards: they seemed to fix it by enabling the Intel LAN PXE Boot ROM (LAN PXE OPROM)---this is not necessary! The PXE ROM is for booting over the LAN, not for waking in response to a magic packet. There is also a distractor called ErP Ready, which seems to enable additional power-saving behavior when the system is in a standby state, but also seems to be mutually exclusive with other power-on options.
The only setting you need to enable is Advanced \ APM \ Power On By PCIE/PCI. This makes sense because the Intel LAN controller is almost certainly connected as a PCIE or PCI device. I was able to wake the system from the off state using the wakeonlan program (a perl script) from MacPorts, even over WiFi.
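For reference, the magic packet that wakeonlan sends is just 6 bytes of 0xFF followed by the target MAC address repeated 16 times (102 bytes total), broadcast over UDP. A sketch of the payload construction; the MAC below is a placeholder, not a real address:

```shell
mac="001122334455"          # placeholder target MAC, hex without separators
packet="ffffffffffff"       # header: 6 bytes of 0xFF
i=0
while [ "$i" -lt 16 ]; do   # then the MAC repeated 16 times
  packet="${packet}${mac}"
  i=$(( i + 1 ))
done
echo "payload: ${#packet} hex chars (= $(( ${#packet} / 2 )) bytes)"
```

In practice you never build this by hand; `wakeonlan 00:11:22:33:44:55` (with the real MAC) does it for you.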
The mkfs options shown in the transcript below aren't necessary; the following will also work fine:
mkfs.ext3 /dev/mapper/FreeAgent
Mount the encrypted partition
mkdir /media/FreeAgentLuks
mount /dev/mapper/FreeAgent /media/FreeAgentLuks
Transcript
This is something I did before, following instructions from somewhere.
[root@exciter ~]# badblocks -c 10240 -s -w -t random -v /dev/disk/by-id/usb-Seagate_FreeAgentDesktop-0:0
Checking for bad blocks in read-write mode
From block 0 to 488386583
Testing with random pattern: done
Reading and comparing: done
Pass completed, 0 bad blocks found.
[root@exciter ~]# cfdisk /dev/disk/by-id/usb-Seagate_FreeAgentDesktop-0:0
Disk has been changed.
WARNING: If you have created or modified any DOS 6.x partitions, please see the cfdisk manual page for additional information.
WARNING!
========
This will overwrite data on /dev/disk/by-id/usb-Seagate_FreeAgentDesktop-0:0-part1 irrevocably.

Are you sure? (Type uppercase yes): YES
Enter LUKS passphrase:
Verify passphrase:
Command successful.
[root@exciter ~]# cryptsetup luksOpen /dev/disk/by-id/usb-Seagate_FreeAgentDesktop-0:0-part1 FreeAgent
Enter LUKS passphrase for /dev/disk/by-id/usb-Seagate_FreeAgentDesktop-0:0-part1:
key slot 0 unlocked.
Command successful.
[root@exciter ~]# mkfs.ext3 -j -m 1 -O dir_index,filetype,sparse_super /dev/mapper/FreeAgent
mke2fs 1.41.4 (27-Jan-2009)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
30531584 inodes, 122095871 blocks
1220958 blocks (1.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
3727 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
	2654208, 4096000, 7962624, 11239424, 20480000, 23887872, 71663616,
	78675968, 102400000