Archive for the ‘Photography’ Category

Update 10.10.2018

24 October 2018

It has been a while since I posted here…once again. I don’t know why that is, but it certainly hasn’t been for lack of anything to post.

Photography-wise, most of my focus lately has been on my old 400mm Tokina. Getting it into sharp focus seems to be an almost impossible task. “Almost” because I refuse to believe it is impossible. For the longest time, I’ve been trying to get “focus trap” to work on the lens with the Pentax K3. No matter what I tried, it just wouldn’t work. It worked with the K10 and *ist, so it was frustrating not being able to use that method.

Then I came across something on the internet that made me search with terms I would never have thought of using. Sure enough, it turned out that with the K3, Pentax added a setting in the Custom menu to enable or disable focus trapping. Not only that, the name of the menu item, at least to me, isn’t intuitive: Sharp Capture. In hindsight, the name does make some sense. So, now I have the ability to use focus trapping.

Even with that, though, it’s still not locking in with the sharp image I remember. Note to self: put the 400mm on the K10 and/or *ist and verify it works like I seem to remember.

I think, however, that there is a slight difference in where the focal plane sits compared to the K10 and *ist bodies. That doesn’t make much sense, but right now it’s the only answer I have. Even with the other manual lenses that I remember working properly on the other two bodies, I have to focus past the subject to get a sharp image.

Right now, my plan is to combine focus trapping with the multiple shot mode. That way, when the focus trap triggers, the rapid sequence of images taken while I’m still adjusting focus manually should hopefully include at least one sharp image. Or at least the sharpest.

We’ll see.

Well, that idea of focus trapping plus multiple shot mode didn’t work. I guess I’m going to have to find another way to do this.

The ham radio arena is the next area to bring up to date. Here, the use of the straight key for Morse code as my computer input keyboard has indeed taught me most of the characters. BUT that only taught me to transmit. Recently, a ham who agreed to be an Elmer (aka mentor) for me sent me a simple device that flashes an LED to the tune of the dits and dahs of Morse code.

I bring that up because once I got this hooked into my radio and everything tuned, I had a chance to actually see some Morse via the LED. For the first time, I was able to see the Morse code clearly and without any of the usual difficulties of differentiating the elements. And that segues nicely into the problem: I could see the dah-dit-dit, but I had absolutely no idea what dah-dit-dit stood for. I actually needed to mentally imagine sending dah-dit-dit with a straight key before I could make the connection to the letter D.

And so I found out that what I was afraid of had actually happened.

I can now transmit a lot of the characters without even having to think about their dit and dah components. The reverse, however, is not true. I could not decipher that visual representation with the same ease I can send it.
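That asymmetry is easy to state in code. Here’s a quick sketch (Python, purely for illustration, with an abbreviated code table): the character-to-code mapping is what sending drills in, while copying requires the inverted lookup, which a computer builds in one line but a brain apparently does not.

```python
# A partial code table: the sending direction (character -> code) that
# keying on a straight key drills into you. Abbreviated for illustration.
SEND = {"D": "-..", "E": ".", "H": "....", "L": ".-..", "O": "---"}

# Copying is the inverse lookup (code -> character). A computer builds it
# in one line; my brain, it turns out, has to build it from scratch.
COPY = {code: char for char, code in SEND.items()}

print(SEND["D"])    # -..
print(COPY["-.."])  # D
```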

Oh, boy.

Now I’m going to have to add an LED to the straight key I use with the computer. There’s an LED on the Teensy board that flashes as I key the characters, but it’s under the board holding the key and not visible. Rats! That would have been a perfect solution.

At least it’s a relatively easy fix, and since the onboard LED already flashes while keying, I can use that same pinout, so no code modifications are needed. I just have to figure out what color LED to use.
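For reference, the flashing follows standard Morse timing: a dah is three dit-lengths, with a one-dit gap between the elements of a character. A little sketch of the on/off pattern the LED traces for the letter D (Python, just to make the timing concrete; the helper name is my own):

```python
DIT, DAH = 1, 3  # element lengths in dit-units (standard Morse timing)

def led_pattern(code):
    """On/off pairs (state, duration in dit-units) for one character,
    with the standard one-unit gap between elements."""
    pattern = []
    for i, element in enumerate(code):
        if i:
            pattern.append(("off", 1))  # inter-element gap
        pattern.append(("on", DAH if element == "-" else DIT))
    return pattern

print(led_pattern("-.."))  # D, dah-dit-dit:
# [('on', 3), ('off', 1), ('on', 1), ('off', 1), ('on', 1)]
```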

Writing. This is where I hang my head in shame.

I’ve done some editing for other authors, but I’ve done precious little writing of my own. As I mentioned last time, I did start a new Pa’adhe story, but nothing past the opening scene and setting up the tale. I have a reasonably decent story for this one, and it’ll provide the backstory for Scarle, but actually sitting down and writing just hasn’t happened. With any luck, writing and posting this will get me going on it.

Unfortunately, I’ve been spending a fair bit of time programming. Unfortunately because otherwise I might have been writing instead.

I bought a cheap 3.5” TFT LCD display that came with NO instructions or paperwork at all. It took me several months to finally locate what seemed to be the same display being sold by another vendor with tutorials and examples. So, that’s now up and running on one of my Arduino Unos.

Plus I’m waiting for a part for a 2004 display (20 characters x 4 lines) so that I can program it over I2C for use with ham radio. This will be a potential display for viewing and decoding Morse code that comes over the air to my radios. It’s intended to just plug into the headphone jack of the radio and display the detected audio and Morse. Eventually I want to modify that to selectively show one of the following: (1) a bar graph or “LED” display of the dits and dahs, (2) a string of dits and dahs such as .... . .-.. .-.. ---, or (3) the actual translation of the code to display HELLO. Maybe even other modes, although at the moment those three seem to cover all the bases for me.
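Those three modes are easy to prototype away from the radio first. A rough sketch of the display logic (Python here just to pin down the behavior; the real thing would live on the Arduino, and the code table is abbreviated):

```python
# Abbreviated code table; the real sketch would carry the full alphabet.
MORSE = {"....": "H", ".": "E", ".-..": "L", "---": "O"}

def mode_bars(code):
    """Mode 1: a crude 'LED bar' view -- one block per dit, three per dah."""
    return " ".join("".join("#" if c == "." else "###" for c in letter)
                    for letter in code.split())

def mode_dits_dahs(code):
    """Mode 2: simply echo the dit/dah string."""
    return code

def mode_text(code):
    """Mode 3: translate the code to plain text."""
    return "".join(MORSE.get(letter, "?") for letter in code.split())

msg = ".... . .-.. .-.. ---"
print(mode_bars(msg))
print(mode_text(msg))  # HELLO
```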

UGH! That’s enough; this is already longer than planned and there’s more, like trips. I’ll save those for another time.


A Return to Stereograms

4 April 2018

I have mentioned working on stereograms previously. These last couple of weeks have seen me focused on them.

Stereogram created from drone video taken at Wickahoney. See text for details.

Most of the ones I have done before are close-ups, if you will, or portrait-oriented.

I wanted to play with stereograms some more, this time focusing on landscapes. My goals were, first, to get them working consistently and, second, to hopefully work out any rules unique to stereograms.

In my mind’s eye, I remember sitting on the floor at my grandparents’ with a big box of stereograms and a now-antique viewer. I would pick out a card with its two slides, read the caption, drop it in the holder, and clap the stereoscope to my face. I remember being fascinated by how I could see a 3D version of a scene and how it contrasted with the pictures on the wall.

More, I remember most of them were landscapes.

Now, I’ll grant you my memories of those images are likely rose colored by time and they may not have been as fantastically 3D as I seem to remember. Most indubitably, though, there were hundreds of landscapes and not so many of flowers, people, or objects.

My goal in this recent project was to create valid 3D landscape stereograms. I also needed to work out what the limitations were, and how best to create a pleasing image that was also 3D.

Like this one:

One of the spots that overlooks Swan Falls and the Snake River Canyon, looking downstream from the dam.

Or this one, where the red rock formation just pops out at you:

Looking at Swan Falls Dam and the Snake River Canyon. The red rock is very prominent in the foreground.

What are the rules?

Aside from standard landscape photography composition “rules,” I felt there must be some additional guidelines that would drive the composition.

As it turns out, there are, and there aren’t.

The first thing to work out when creating stereograms is how to take the paired pictures. My handheld method:

  1. Take a picture of your subject. Remember where the center of your picture is on the subject.
  2. Take a step to the left. I usually stand with my feet just more than shoulder width apart. After taking the first picture, I move my right foot to touch my left foot then move the left foot so I am again standing with feet apart.
  3. Aim the camera at the exact same point on the subject as before.
  4. Take another picture.

That is my way of getting paired, handheld pictures. The first picture taken thus becomes the “right” picture (as in taken from the right) and the second becomes the “left” picture. The key to assembling the stereogram is the right picture goes on the left side and the left picture goes on the right side.

You can, of course, do it stepping to the right instead. In that case the first image becomes the left picture and the second becomes the right picture. No biggie, just get into the habit of doing it the same way each time.
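Since the swap rule is easy to get backwards, here it is spelled out as a tiny sketch (Python, with stand-in values for the actual photos; the function name is mine):

```python
def assemble(first_shot, second_shot, step_direction="left"):
    """Order a handheld pair into (left_panel, right_panel).

    Stepping LEFT between shots means the first shot was taken from the
    right-hand position, so it is the 'right' picture -- and the right
    picture goes on the LEFT side of the stereogram (and vice versa).
    """
    if step_direction == "left":
        right_pic, left_pic = first_shot, second_shot
    else:  # stepped right: the first shot is the 'left' picture
        left_pic, right_pic = first_shot, second_shot
    return (right_pic, left_pic)  # (left panel, right panel)

print(assemble("IMG_1.jpg", "IMG_2.jpg", "left"))
```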

Sometimes a question arises as to whether the middle and far parts of the photograph actually show as 3D. Sometimes they do, sometimes they don’t. Sometimes they work if you have some decent foreground detail; other times you don’t need that foreground to make it work.

Willow Creek, off Black’s Creek Road. Notice the apparent differences in 3D impact in this compared to the Swan Falls stereograms.

And then, there’s the issue of anything that’s moving…that’s likely to produce “ghosts”, faint or translucent objects in the photo. Your main scene, the one you want to see in 3D, has to be still. Trees moving in the wind, clouds passing by overhead, cars on the road, people moving…all those and more need to be avoided.

One way to avoid natural movements such as clouds and water ripples is to use a long exposure time. That way, things get “smudged” smooth. Ripples on water, for example, become a soft flat surface and clouds become featureless.

Interestingly enough, I have one stereogram (below) where the two angles are such that one shows the parked truck and the other doesn’t, and yet the truck is solid in the stereogram view. It’s not a translucent ghost despite being in only one of the paired images. Yet another stereogram I’ve done freezes a car in one picture but not the other, and that time it shows as a ghost car. Go figure. That’s what I mean about “there are and there aren’t additional guidelines.” More likely I haven’t figured them out yet.

Notice how the black truck is in one image but not the other, yet still comes through in the stereogram as solid, not as a ghost image.

By the way, the Wickahoney stereograms were all pulled from a video created by orbiting my DJI Phantom 4 around the midpoint of the ruins. You do remember that a video is merely a string of still images played back rapidly? Each pair, in this case, was pulled from frames about 1 second apart, e.g. one from 13 seconds into the video and the second from 14 seconds in. When creating a stereogram from a video this way, you want to be sure the still images aren’t blurry from the drone moving too fast.
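Pulling a pair out of a video boils down to frame arithmetic. A sketch (Python; the 30 fps here is an assumption for illustration, not a measured value from my footage):

```python
def stereo_frame_indices(t_seconds, fps=30.0, gap=1.0):
    """Frame numbers for a stereo pair pulled `gap` seconds apart.

    Thanks to the drone's orbital motion, the earlier and later frames
    play the same roles as the two sidestepped still photos.
    """
    first = round(t_seconds * fps)
    second = round((t_seconds + gap) * fps)
    return first, second

print(stereo_frame_indices(13))  # (390, 420): the frames at 13 s and 14 s
```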

Stereogram from Wickahoney drone video.

One thing I did discover is that if you use a zoom or telephoto lens to enlarge something in the distance and make it part of your foreground or the middle distance in the photograph, you have to displace the camera location much more than a single step to one side. A problem I encountered was I could properly displace the distant solitary tree but the mountains behind it shifted significantly. They shifted enough that even though I could get the tree to be reasonably 3D, the more distant mountains were blurry.

A wide angle lens, though, works great and lets you really bring in some foreground:

Snake River Canyon from an overlook at Swan Falls, looking upriver from the dam.

And that’s as far as I’ve got. I’ll be going out and shooting more landscapes, as well as some closer subjects.

I think I know how to apply this technique to video as well and plan to try it with the video used to make the above Wickahoney stereograms. That’s for another time, though.

Astrophotography Workflow

12 October 2017

Since the last time I blogged about my astrophotography tools things have changed somewhat. I thought I would write up my current workflow, without making it as much an app tutorial as I did last time.

For starters, I no longer use The Photographer’s Ephemeris. It was, and still is, a great program for laying out sight lines, times, and such. Ever since it moved to online only, though, it’s been pretty much useless to me in the field, since I can’t use it there to work things out.

I pretty much rely now on two phone apps and a computer program:

Stellarium: In the field on a laptop or at home, this is my favorite planetarium program. It lets me see what the sky will show at any location, date, and time, and conversely lets me see when a particular sky object will be where I want it for a photograph. Plus it’s great for finding your way around the night sky on site.

Dioptra: An Android-only app, it illustrates the adage, “a picture is worth a thousand words.” With it, I can record in one image the desired view from that location, the actual GPS coordinates, and the compass bearing of the view. It records a few other details as well, but those are the ones I focus on.

Sun Surveyor: Available for both Apple and Android, I use this mostly on-site. Its Live View allows me to see the paths of the Milky Way, sun, and moon through the sky superimposed on that location. It’s useful in allowing me to get everything aligned on that spot and ready to take pictures before dark.

Yesterday, I went into the Owyhees with the goal of scouting a location. As can be seen later, the location doesn’t align for the planned photo shoot any time soon but using Stellarium I was able to identify a different possibility that I could take advantage of.

What I list below is pretty much my usual workflow.

Generally, I start out with an idea, which for some reason seems to tend towards shooting from in a canyon to frame the Milky Way or a planet or constellation between canyon walls. This time, I was thinking “Milky Way above Succor Creek.” I know the Succor Creek picnic area is in a narrow canyon (see what I mean?) and there is a bridge that crosses over the creek there. So, off I go into the Owyhees.

A cowboy, in baseball cap, chaps, jeans, jacket, on a brown horse herding three cows and a couple calves under a mostly white cloudy sky alongside the dirt road in the Owyhees.

A working cowboy herding cattle in the Owyhees.

After a relaxing drive, I arrive at the site. I take my camera bag out on the bridge, and position myself centered over Succor Creek. After first turning on the GPS, I pull up the Dioptra app. Once it’s open and I verify I have a GPS lock I wave my phone around in the classic 3 figure 8s to calibrate the compass.

Hmmm. Just had a mental picture of me in a hooded cloak, mystically waving my arms to summon Magnetic and command him to calibrate my compass.

Anyway….

The next step is to simply point the phone camera for the view I want and take a picture. The app then records that view overlaid with all the necessary “notes” I need to work with Stellarium.

A view up a creek with heavy growth of small trees, brush, and grass on both sides. In the far distance a butte sticks up from the horizon visible between the brush, aligned with the center of the creek. Partially white cloudy sky, mostly blue sky. Superimposed is various information from the Dioptra app like a heads-up display.

The output of Dioptra at the bridge over Succor Creek, Owyhees, Malheur County, Oregon.

While it’s hard to see, in the center of the image is a reticle that gives you an aiming point. I usually only use that for direction alignment on some landscape feature that I might want in the end image. Under that is a compass bearing, in this case 138° which is the direction of interest, straight up Succor Creek. Luckily for me, that distant butte is in line with the creek. In the upper left is the latitude, longitude, and altitude of that spot on the bridge. At the bottom is the compass direction. The two angles on the side are useful for getting the camera perfectly level but in this situation I don’t really care about those.

As you can see, one of the current issues with Dioptra is the use of white text and no way to change that. Hopefully, the programmer will be adding an option to change the text color in the future, but for now there are some workarounds. For example, you can change the camera angle to put the text onto a darker background and take a second picture. Or put your hand over the lens. That gives you the first picture showing you the planned view orientation and a second picture that ensures you get all the necessary information.

Similar to the previous Dioptra image, but from the middle of a dirt road. Back half of a blue-green Blazer visible to the left, steep reddish-brown cliffs to either side of the road.

Another Dioptra image, this time on the road to Succor Creek. Note the better visibility of the information upper left.

This is a Dioptra shot at another location. Notice the center information is almost completely lost in the white cloud but the information top left stands out quite nicely. It’s hard to see, but this straight run of the road lines up on 162°, a bit more towards the south and an alternative which would give me those distant rock fingers reaching to the sky.

My next step is to take a few shots with the camera and lenses I am considering using. In this case, I took an image at each end of two zoom lenses, my fisheye (10-17mm) and my regular 18-55mm. I usually use the fisheye for my astrophotography, but it’s useful to try the other lenses as well. Sometimes the framing at a different focal length just works better, and if you don’t check, you won’t know.

Succor Creek test photo, 17mm focal length.

Succor Creek test photo, 10mm focal length.

Succor Creek test photo, 35mm focal length.

Back at the house, I pull up Stellarium on the computer. Using the location function, I enter the latitude, longitude, and altitude. Next, I move the view to the desired compass bearing. I can also set the field of view to match that of the lens I plan to use but I tend to leave that at the default setting unless it’s a site I use regularly and have a landscape for.

Pulling up the time function and setting it to 2300 tonight, I saw that the Milky Way wouldn’t line up with the creek…at all. It would be coming up over the canyon wall to the right. The 10mm focal length image above does show that I could get a decent shot with the fisheye and still have the creek in the image. The creek wouldn’t run down the middle of the image, though, if I really want to maximize the Milky Way, and that creates a line that guides the viewer’s eye away from the Milky Way.

Not good. At all.

So, now I start clicking on the day in the time function, advancing roughly 24 hours per click. As I watch the screen, I notice the moon goes across the scene regularly. A bit of playing with the time and date shows that I could possibly get a shot of the moon high over the distant butte. The creek would guide the eye to the butte and the butte would point up to the moon. With the right moon, Succor Creek would be a ribbon of silver. That’s a decent possibility.

Advancing day by day again, I come up with a shot that doesn’t have the Milky Way, but does have Orion over the distant butte. Hmmm. That’s another possibility. The moon would be up, but hidden by the left cliff wall. The date says 2017/12/6…December 6th. Depending on the weather, that might be fun.

Stellarium, showing Orion above the butte. Succor Creek would be visible vertically in the lower 1/3 center (as seen in previous pictures).

Finally, I get the northern part of the Milky Way aligned…on 2017/12/27. Not as impressive as the main body of the Milky Way, but a possibility. I really want the center band, though, so I continue advancing…to 2018/06/11. Sigh. All the way to next June before I can get that shot.

Hey, Saturn’s there, too, and pretty much right over the butte!

Imagine the 10mm image of Succor Creek above with the Milky Way over it. Saturn would be directly above the butte.

So, now I have a few dates for images that might work at that Succor Creek bridge location. I know what to expect, where to aim the camera, which lens I will probably use, how early I have to be there, and how late I’ll have to stay. It’s a good opportunity to just go camping, too, knowing I’ll have some neat pictures of the night sky as a result.

If the weather cooperates.