ASTER GDEM NUMBERS & ACCURACY

WHERE AND HOW

Download USGS Satellite Imagery at http://earthexplorer.usgs.gov/

Create an account and either search for the area/feature you want or click it on the map.

Enter a date range: 2000 to today.

Height maps = Digital Elevation > ASTER GLOBAL DEM

High-res colour maps = Aerial Imagery > High Resolution Orthoimagery

Click ‘Results’.

‘Show Browse Overlay’ displays a preview of the satellite image on top of the world map.

‘Show Metadata and Browse’ contains dates, coordinates, etc.

THE GeoTIFF

Tile size = 1° lat x 1° long

1° of latitude = 110.567 km at the equator and 111.699 km at the poles (the Earth bulges at the equator, so degrees of latitude are shortest there).

Average = 111.133 km

3601 columns x 3601 rows

pixel size = 1 arc second

1 arc second ≈ 30.87 m (111.133 km / 3600; and yes, like the degree length, this also varies with latitude).

3601 x 30.87 m = 111.163 km, which comes out a bit above the average because adjacent tiles overlap by one pixel: the 3601 samples span only 3600 one-arc-second intervals, and 3600 x 30.87 m = 111.133 km.

*Either stick to the average or work out the area from the lat/long coordinates for exact measurements.
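The tile arithmetic above can be sanity-checked in a few lines (using the average degree length, which is only an approximation since the exact value varies with latitude):

```python
# Sanity check of the ASTER GDEM tile-size arithmetic
AVG_DEG_KM = 111.133                     # average length of 1 degree of latitude
arcsec_m = AVG_DEG_KM * 1000 / 3600      # 1 arc second in metres

# 3601 samples span only 3600 one-arc-second intervals, because adjacent
# tiles share a one-pixel overlap, so the ground span matches the average
span_km = 3600 * arcsec_m / 1000

print(round(arcsec_m, 2))   # 30.87
print(round(span_km, 3))    # 111.133
```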

16-bit TIFF with 65,536 levels of grey; voids (ocean, caves, etc.) are flagged as -9999.

Offsetting by +9999 puts voids at grey 0 and sea level at grey 9,999.

So land elevations relative to sea level need a 15.257% offset in the remap to account for the voids (9999/65536 x 100).
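A minimal sketch of that void-offset remap, assuming signed 16-bit heights in metres with -9999 flagging voids (the sample values here are made up):

```python
import numpy as np

# Toy DEM tile: signed 16-bit heights in metres, voids flagged as -9999
dem = np.array([[-9999,    0, 1500],
                [  250, 4418, -9999]], dtype=np.int16)

# Shift by +9999 so voids land at grey 0 and sea level at grey 9999,
# i.e. 9999 / 65536 * 100 ~= 15.257% of the 16-bit range
grey = (dem.astype(np.int32) + 9999).astype(np.uint16)

print(grey[0, 0], grey[0, 1])          # void -> 0, sea level -> 9999
offset_pct = 9999 / 65536 * 100
print(round(offset_pct, 3))            # 15.257
```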

Stated vertical accuracy is 17 m at 95% confidence.

Which allows for a small percentage of elevation to be added or removed in Houdini etc.

MOUNT WILLIAMSON - Ansel Adams - recreation

Mount Williamson by Ansel Adams. The inspiration.

Metashape: 300 x 4K images from Google Earth, medium-quality dense point cloud. *If I set it to High, after a day my machine just freezes lol.

Houdini - geoTIFF - way larger than what I need.

Crop out Mount Williamson.

Transform back to centre, and Modify menu > Move pivot to centre.

Then scale up to match reality, and arbitrarily move TY back down to zero. ***Nope, got it wrong; see later posts.

Now trying to line up MetaShape geo with my scaled geoTIFF terrain.

Aligned: the geoTIFF has more verticality than the Google Earth point-cloud geo.

Finally, an Ansel camera match and lineup on just the Metashape geo.

Google Deep Dream Experiments

As I understand it, Google's DeepDream software uses libraries of images and then looks for those images in whatever image file you give it, at various levels of detail depending on how you set the octaves up. It iteratively applies the filter again and again to bring out whatever it thinks it finds.
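The feedback loop can be sketched as a toy, with a fixed Laplacian high-pass filter standing in for a real CNN layer (this is an illustration of the coarse-to-fine octave loop and the iterative amplification, not Google's actual code):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.normal(0.5, 0.01, (64, 64))   # near-flat input, like dreaming on white

def highpass(x):
    """Laplacian response: our toy stand-in for a CNN layer activation."""
    p = np.pad(x, 1, mode="edge")
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * x

start_std = img.std()
for octave in range(3):                  # real DeepDream rescales the image per octave
    for step in range(10):               # iterative amplification at this scale
        g = highpass(img)
        # Sharpening step: subtracting the normalised high-pass response
        # amplifies whatever faint structure the "layer" already detects
        img = np.clip(img - 0.5 * g / (np.abs(g).mean() + 1e-8), 0.0, 1.0)

print(img.std() > start_std)   # structure has been amplified out of almost nothing
```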

The first few layers bring out patterns. In the middle set of layers, eyes start to appear. Then the last set of layer settings brings out dogslugs, snakes and lizards, then architecture. But so many cute doggy faces...

So it has a very specific look and style. Not completely random. Why so many dog faces?

Sometimes like a bad photoshop filter but sometimes wonderfully surreal.

How can I use it effectively?

'American flag' test. (Could have used any national flag, but this is inspired by Hunter S. Thompson's "Fear and Loathing in Las Vegas". Deep Dream and the American Deep Dream just fit so well together.)

'Uncle Sam wants you' test. Eyes. Again seems to fit well with the revelations of how we are all now being watched 24/7.

Ok, what if I give it nothing... just white pixels to dream on?

Wow! I was surprised it worked, as the image is pure white with zero noise variance.

Ok what about black?

Again totally unexpected. The cuteness has become quite evil and disturbing.

And finally 50% grey for all the concrete everywhere.

can phone x 3 intercept

re: Joseph Beuys "Telefon S_E"

Add a third, intercepting string in the middle with a black can and black string.

Train to Pluton

Dark Star Dark Matter Dark Net Invader Dark Echo Ghost Return Isolate Lie Down in Darkness Low Hum Onyx We Exist Afterlife Supersymmetry

Here Comes the Night

But when the morning came
You would catch me at the window again
In an eyes wide open sleeping state
Staring into space, with no look upon my face
I was a dreamer
Staring out windows
Out onto the main street
Cos that’s where the dream goes
And when I got older, when I grew bolder
Out onto the streets I flew
Released from your shackles
I danced with the jackals
— http://songmeanings.com/songs/view/3530822107858821708/