20150930

This started a while back as a hobby project attempting to produce a better and more realistic non-real-time CRT simulation for print media such as The Bitmap Brothers: Universe book. It ended with some interesting thoughts about tone-mapping and other things which can be applied to real-time CRT simulation as well...

Tonemapping
CRT simulation for low-dynamic-range media is mostly a tonemapping and filtering problem. A CRT has a large amount of black surrounding some very bright phosphors, and even more black for vintage arcade monitors which scan only half the fields but at twice the rate. The challenge: how to maintain overall image brightness and still relay the feel of scan lines and tiny phosphors.

There are some common methods of tonemapping in games and film which fail on this problem, and a solution which might provide some insight on improving tonemapping for games and film.

The most common tonemapping method for games is to tonemap the RGB channels individually. This is one of the core reasons a majority of games have serious color distortion in the darks (often over-saturation): the ratios of the color channels are not preserved. The Dec 2014 version of ACES, which is commonly referenced and applied in game engines, has this problem.

Often the next evolution of this idea is to split tonemapping into separate operations for luma and chroma. Typically the chroma is left alone with the exception of the highlight compression area, where channel cross-talk is added (effectively a desaturation) so that brights over-expose to white instead of clamping at pure primary and secondary hues. Effectively RGB is converted to XYZ then to xyY, then Y (luma) is tonemapped, then xy (chroma) is pulled towards the white-point in areas of over-exposure. This however suffers from a similar problem as tonemapping RGB channels individually. While the chroma is preserved for a single pixel (better), the extent to which a pixel's brightness changes during tonemapping is a function of luma (bad).

Specifically luma is mostly affected by green, followed by a little red, and very little blue. So two different pixels, say one with only 0.25 of green and one with only 0.25 of blue, get tonemapped very differently: the blue hues fall off very quickly, the greens quite a bit later. For CRT simulation this presents quite a problem, because often a source "pixel" is spread across phosphors of pure primary color. When tonemapping luma, the ratios of RGB are not preserved across the phosphor splats (multiple output pixels) which represent a source pixel. The net result is that the darks look green as contrast is increased in the tonemapper.
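To put numbers on this, a quick sketch using the standard Rec.709 luma weights (assuming a Rec.709 working space):

```c
/* Rec.709 luma weights: green dominates, blue barely contributes. */
static float Luma709(float r, float g, float b) {
    return 0.2126f * r + 0.7152f * g + 0.0722f * b;
}
/* Two pixels with equal channel intensity land on very different
   parts of a luma-driven tonal curve:
     Luma709(0, 0.25, 0) = 0.1788  (green-only)
     Luma709(0, 0, 0.25) = 0.0181  (blue-only), roughly 10x apart. */
```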

The solution for CRT simulation tonemapping is to instead tonemap max(R,G,B), then normalize RGB and scale to the tonemapped value. This preserves RGB ratios across adjacent pixels representing phosphors of the same input pixel.

One aspect of CRT simulation tonemapping which does not apply to film or games: with CRT simulation it is desirable to avoid adding cross-talk on over-exposure. This way the simulated phosphors maintain pure hues.

Slot Sharpness vs Loss of Brightness
I built a CRT simulation shader which can enlarge an image 16x in each dimension. Showing stills below up to 16x enlargement and what a zoomed out view would look like. The CRT simulation shader for each enlarged pixel accumulates the effect from the 7 nearest slots per color channel across two scan lines. There are controls for tonemapping (exposure, contrast, clipping), scan-line thickness, scan-line horizontal sharpness, slots per source pixel, slot fall-off (controls slot sharpness), slot vertical gap, and something which can provide a magnification projection of part of the image per slot (hack).

The Metal Slug image example below shows roughly the limit of what can be done under an attempt to maintain slot sharpness at the expense of brightness. The tonemapping is using roughly a 4x increase in brightness to work around the slot mask, with 2x the contrast. This tends to have a negative effect on the brights, squashing them.

Settings used in test app: 16 400 200 100 066 040 100 080 012 100

Seeing the full size image (below) shows exactly why there is no great solution here. The extreme tonemapping tends to expand out the very bright phosphors, while the less bright ones maintain most of the feel of an individual slot. But the trade-off is a lack of ability to maintain the contrast or brightness of the source image.

Slot Bleeding
This next selection of shots shows another compromise, this time allowing the slots to bleed into each other, increasing brightness. To work around losing the perception of slots, I increased the vertical gap between slots, to keep at least the perception of the honeycomb layout of slot triplets. This requires only a 3x increase in brightness, and a drop to only 1.5x contrast in the tone-mapper. I used the "magnification hack" setting to push the slots to be mostly filled or not filled. Also increased the horizontal scan-line blur compared to the prior section.

Settings used in test app: 16 300 150 100 066 050 100 060 020 030

Visible Scan-Lines and the Importance Of Adaptive Highlight Compression
This time I believe the shot might be from some version of R-Type? Switching to visible scan-lines requires an increase in the slot-mask density (or CRT resolution). The slot mask is almost invisible in the zoomed out shot. This example also allows some highlight clipping in the tone-mapping step, which can grab back some brightness and dynamic range. When a large amount of highlight tonality is reserved for a large dynamic range which is never used, the result tends to be a reduction of contrast or squashing of the highlights. This suggests for games it is very important to smoothly and adaptively adjust the highlight part of the tonal curve at run-time.

Settings used in test app: 16 300 150 030 025 060 160 040 030 100

Estimating What Modern Games Would Look Like on Vintage CRTs
Just for fun I tried running some Star Wars Battlefront press screen shots through the filter. Certainly no major game would ever ship with a CRT filter like this, but I actually prefer the dropped resolution and slot mask as a way to hide the perfection of high-resolution GPU real-time renderings. The result is certainly way less sharp, but the effective super-sampling would still allow for accurate aiming (fractional pixel movement changes the pixel gradient). Also in my case, the lack of information enables the mind to fill in the gaps, resulting in a more believable scene.

The blog might chop these on the right, so click the image for link to full size.

20150915

Minimal Operand CPU ISA For IPC

Dumping some thoughts on CPU design: one way to design for higher IPC (ILP) with something similar to a dual-stack Forth machine...

This started from the thought that it might be possible to apply almost-zero-operand ISA design to a CPU designed for IPC, packing either 3 or 4 operations into a single 32-bit instruction word. This form of instruction compression has all sorts of advantages, both in reducing bandwidth and, as a side effect, reducing the wires and ports required to implement. Started thinking about a fictional implementation of a 32-bit integer-only computer, something which, in theory thanks to a minimal ISA, could be JIT compiled on a conventional CPU via just a lookup table (would be very fast emulation). However in this post I'm deviating from the idea of emulation and throwing in ideas which would be better for dedicated hardware (like using per-slot data stacks instead of just registers).

Each instruction is a fixed 32-bits in size.
Each instruction holds 4 slots {0,1,2,3}.
Each slot has an 8-bit opcode.

11111111111111110000000000000000
FEDCBA9876543210FEDCBA9876543210
MSB                          LSB
........................ssssssss 8-bit opcode for slot 0
................tttttttt........ 8-bit opcode for slot 1
........uuuuuuuu................ 8-bit opcode for slot 2
vvvvvvvv........................ 8-bit opcode for slot 3
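As a sketch, decoding the four slot opcodes out of an instruction word is just byte extraction (hypothetical helper, matching the layout above):

```c
#include <stdint.h>

/* Pull the 8-bit opcode for slot 0..3 out of a 32-bit instruction
   word; slot 0 lives in the low byte, slot 3 in the high byte. */
static uint8_t SlotOpcode(uint32_t inst, unsigned slot) {
    return (uint8_t)(inst >> (slot * 8u));
}
```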

Slots share the following 32-bit registers,
return stack pointer : R
instruction  pointer : I

Each slot has its own 32-bit registers: slot {0,1,2,3},
top     registers : {S,T,U,V}
2nd     registers : {K,L,M,N}


The "top" is the top of a very short per-slot data stack (which loops around on overflow, and has no memory backing). The "2nd" registers are just the 2nd item on the data stack. This split data and return stack shares a lot of similarities in structure to Color Forth style hardware. Reads of the top and address registers of other slots would see the value from the prior clock. So think of all the slots executing in parallel.

Dedicated address registers provide some interesting properties. It becomes possible to limit the opcode space and operations applied to them. It also becomes possible to auto prefetch cache lines into L0 (something much closer than L1) when the address registers are set. So half the opcode space is used for 32-bit loads and stores, with a literal range large enough for access to a complete cache line. The # below represents how many opcode slots are taken up. NOTE, addressing is by 32-bit word, not by byte! The load pushes the fetched value on the slot's data stack (so 2nd ends up being the old top). The store pops the value off the slot's data stack (so top ends up being the prior 2nd, etc). Grabbing 32-bit literals can be done with one opcode "#", taking the next instruction as data, and advancing the instruction pointer. There is a secondary load path which uses a top register as an address. The . is a place-holder for some opcode binary data I didn't feel like fleshing out.

ENCODING  ASM   #   MEANING
--------  ----  --  -------
1.....ss  s@    4   top(%)=[top(s)+i]
1.....ss  s!    4   [top(s)+i]=top(%)
1.......  #     1   top(%)=next instruction

% = current slot
s = 2-bit slot index
i = 4-bit unsigned immediate


Integer operations can take advantage of the slot-implied destination and first source, using another slot's "top" as the 2nd source, or using the current slot's "2nd". Using "2nd" consumes the value from the slot's local data stack. Using "top" from another slot does not consume the value. These support cross-slot source reads without taking much opcode space. I left out shifts and other operations, just providing a few examples below,

ENCODING  ASM   #   MEANING
--------  ----  --  -------
1.....ss  s     4   top(%)=src(s)
1.....ss  s+    4   top(%)=top(%)+src(s)
1.....ss  s*    4   top(%)=top(%)*src(s)
1.....ss  s&    4   top(%)=top(%)&src(s)
1.....ss  s|    4   top(%)=top(%)|src(s)
1.....ss  s^    4   top(%)=top(%)^src(s)
1.......  -     1   top(%)=-top(%)
1.......  ~     1   top(%)=~top(%)

% = current slot
s = 2-bit slot index
src(s) := if(s==%) 2nd(%) else top(s)


Address register operations are separate from standard ALU ops. Supporting the ability to write to another slot's address register unlocks working on address registers from any slot. Setting an address register pops the top off the slot's local data stack. Fetching an address register pushes the value on the slot's local data stack.

ENCODING  ASM   #   MEANING
--------  ----  --  -------

% = current slot
s = 2-bit slot index
src(s) := if(s==%) 2nd(%) else top(s)


Branching can act as a terminator of the 4-opcode packed 32-bit instruction word. Slots after a branch become nops, and the space is reused for a literal for the branch itself: a 24-bit, 16-bit, 8-bit, or 0-bit displacement depending on which slot holds the branch.
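A sketch of the displacement decode under that scheme, assuming the literal occupies the bytes above the branch's slot and is sign-extended (an illustrative encoding, not fully specified above):

```c
#include <stdint.h>

/* A branch in slot s leaves (3-s) later slots unused; their bytes
   become the branch literal. Extract and sign-extend the 24-, 16-,
   8-, or 0-bit displacement. */
static int32_t BranchDisp(uint32_t inst, unsigned slot) {
    unsigned bits = (3u - slot) * 8u;
    if (bits == 0u) return 0;
    uint32_t raw  = inst >> ((slot + 1u) * 8u); /* bytes above the branch */
    uint32_t sign = 1u << (bits - 1u);
    return (int32_t)((raw ^ sign) - sign);      /* sign-extend from 'bits' */
}
```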

Bacon Wrapped Sour Cream

Typical breakfast for a fatatarian. Sour cream from grass fed cows + sharp cheddar cheese + chopped garlic. All wrapped in bacon which was fried in itself, but not converted to charcoal like many Americans seem to prefer.

20150914

Tearaway Unfolded PS4 on Wega CRT HDTV

Using an iPad camera to take CRT shots didn't work out at all (severe moire patterns and noise reduction logic getting in the way), so the shots below don't accurately show what it really looks like...

Tearaway Unfolded on PS4 on these last generation "HD Ready" Wega CRT HDTVs looks fantastic. Absolutely no aliasing at all. The combination of in-game MSAA and 1080p to 720p scan-out scaling in the PS4 works out quite well.

20150911

Self-Correcting Part 2 or Rather a Rant on Why Extreme DoD

A "modern" API effectively coupled to a language: everything revolves around calling "functions", which is a lock-in to a very specific and constrained way of thinking.

What if the API instead is a language agnostic description of data layout, with some protocols (again data layout) for communication (message passing). The "language" need not matter, and is completely replaceable at will. The components which transform the data, aka the nodes in the program's data flow, could be written in any language and are effectively throw-away, replaceable pieces.

The aim is to get to the point where the program is a canvas which is easily and instantly malleable, but still runs at to-the-metal performance, with zero downside to dangerous experimentation: that instant feedback addiction loop.

The GPU provides this natively: dedicated hardware to contain accesses within resources. Traditionally the data layout is the collection of images and buffers, the format of the data inside those resources, and the rules by which the data can be adjusted. For GPU programming using the "bind-all" style (where there is only one giant descriptor set with everything in it, so that all shaders have access to all descriptors), shaders accessing the wrong resources is less of a problem; the larger problem is out-of-bounds access within a given resource.

On the GPU you can build a pipeline of operations which keeps running even in the presence of bad data. Sure the output may be totally wrong, but it runs. For live compressed video broadcast in the presence of packet loss, there are sweeping I (non-predictive encoded) macro-blocks which act as a cleaner, causing the frame to re-converge to correct form when data goes bad. This same concept can be applied to GPU data. Resource caches could have a cleaner which periodically, at a slow rate, reloads parts of the data from storage. Or with hardware support for async compute, run a very low utilization and low priority background job which rebuilds procedural structures, or validates the correctness of various data structures. The robustness of such a system might enable things like partial state saves and restores from out-of-sync systems to work well enough to be useful. This also relates to maintaining a bounded frame rate: designing in the ability for the engine to limit itself and scale regardless of input, and perhaps adapt to rapidly varying limits. This isn't really a programming convention for a language, it is more about making robust solutions which allow for rapid development.

One of the key components missing from PC GPU APIs is a stable way to mutate the GPU command stream from the GPU. Or perhaps this just starts with efficient predication in a given stream: place everything possible in the baked stream which gets replayed all the time, then use logic embedded in the stream to avoid things which need not run (but with the decision based on active GPU state), and dispatch indirect for variable workloads. This works as long as the kernels/shaders in the graph are constant. When code is specialized instead of data, the process breaks down. However, specializing code is ultimately what leads to giant bloatware and development gridlock. Simplification favors data specialization over code specialization. Second issue: when bindings are specialized instead of data, the process also breaks down. This is why I'm a "bind-all" type of person. If I'm stuck with limited bindings I'd rather take the hit of using texture arrays.

This is also heavily related to memory organization. I don't dynamically allocate in the traditional sense. I always statically partition into fixed resources with bound limits at start time, with aliasing to maximize utilization. Machines have GBs of memory, so why dynamically allocate anything of variable size? Use a dead simple fixed-size resource pool allocator, which is trivial to implement on highly parallel machines like GPUs.
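A minimal sketch of such a fixed-size pool: a free list threaded through a statically partitioned array (the slot count and payload shape are made-up example numbers; on a GPU the free list would become an atomic stack of indexes):

```c
#include <stdint.h>

/* A fixed pool of same-sized resources, partitioned once at start
   time. Free slots form a singly linked list through poolNext. */
#define POOL_SLOTS 256

static uint32_t poolData[POOL_SLOTS][64]; /* the actual resource memory */
static uint16_t poolNext[POOL_SLOTS];     /* free-list links */
static uint16_t poolHead;

static void PoolInit(void) {
    for (uint16_t i = 0; i < POOL_SLOTS; i++) poolNext[i] = (uint16_t)(i + 1);
    poolHead = 0;
}

/* Returns a slot index, or 0xFFFF when the pool is exhausted. */
static uint16_t PoolAlloc(void) {
    if (poolHead >= POOL_SLOTS) return 0xFFFF;
    uint16_t i = poolHead;
    poolHead = poolNext[i];
    return i;
}

static void PoolFree(uint16_t i) {
    poolNext[i] = poolHead;
    poolHead = i;
}
```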

Getting back to CPUs, having function call interfaces for everything is a disease. What I'd much rather have is a collection of ports or interfaces in a fixed layout in memory. Starting with something simple like time. If the hardware has an accessible wall-clock time interface, say via some ISA opcode, then I don't need a function to get the time. I just need a convention for where to find, at a fixed location in memory, the base time value which I add to the ISA opcode result to get the real time. Now for keyboard access. Just give me a key bit array at a fixed location in the memory layout. Have a background thread atomically OR in bits on key presses, while I atomically AND out bits after I process key presses. How about file access. Just have a convention for the maximum number of file handles, maximum path size, a bit array to flag entries which are newly requested transactions, a bit array for the OS to reply that a transaction is finished, a convention that lower array elements (requests) are completed first, etc; set up fixed arrays for this stuff in memory. Everything ends up being: write data to memory, then ring some kind of doorbell to signal the OS to get busy. There is no functional interface, just a text document which describes the memory layout and how to use it.
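The keyboard example can be sketched with C11 atomics (the layout and names here are hypothetical, just illustrating the OR-in/AND-out convention):

```c
#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical fixed layout: 256 keys as four 64-bit words. The OS
   side ORs a bit in on key press; the app side ANDs it out when it
   consumes the press. No function-call API, just a memory convention. */
static _Atomic uint64_t keyBits[4];

static void OsKeyPress(unsigned key) {          /* OS/background thread */
    atomic_fetch_or(&keyBits[(key >> 6) & 3u], 1ull << (key & 63u));
}

/* App side: non-zero if 'key' was pressed since last consumed. */
static int AppConsumeKey(unsigned key) {
    uint64_t bit = 1ull << (key & 63u);
    uint64_t old = atomic_fetch_and(&keyBits[(key >> 6) & 3u], ~bit);
    return (old & bit) != 0;
}
```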

Bringing this back to GPUs: when the GPU can just write into CPU-side memory, and there is some convention for triggering an interrupt, or some OS convention to poll at a rate at which interrupts are not necessary (aka you don't run hot then sleep, but rather stay live at the lowest power state, with cores powered down), then the GPU can use the same memory conventions as the CPU to communicate with the OS. And perhaps you split the read and write sections of this memory, and not depend on atomics as in my prior examples, etc. New OS functionality doesn't need a new API when the GPU wants access; the API = dead simple loads and stores.

20150910

September Trip to Iceland

Lodging
Stayed for six days around Reykjavik driving to other destinations, choosing a BnB via www.airbnb.com/s/Iceland. Planning most everything in advance proved very useful as the rental's wifi did not work. Did not data roam on the phone, and most of the interesting places won't have cell reception anyway. Cell phones are effectively useless as they are cloud-navigation-only devices, and the GPS was also mostly a fail, as it was impossible to figure out how to input various destinations. Instead had pre-printed paper maps and memorized the route to the grocery, etc. Next time I'll try pre-printing the GPS coordinates, and standard full addresses as well.

Auto Rental
Rented a manual Suzuki Jimny 4x4 from Budget. At first they didn't have a Jimny and forced us to get an "upgrade" to a Nissan Qashqai. This was a serious downgrade. Avoid the Nissan Qashqai at all costs: it has a horrible computer controlled throttle which partly takes over during shifts, it auto turns off the engine when you are braked in neutral, its clutch feels like it does not exist, and it drives like a front-wheel-drive car instead of a 4x4. Lucky for us, the previous renters had topped off the tank with petrol instead of diesel. Early in the trip, I noticed what felt like a bad fuel pump, so I returned it. The girl at the rental return didn't notice the problem on a test drive, and started telling me they were going to charge me to fill up the tank on the exchange. Eventually they brought over their mechanic, who had the smarts to sniff the diesel tank, and knew exactly what was wrong. Thankfully I left with a Jimny, and they tossed in a free GPS for the trouble. The Jimny is great: tiny, easy to heel-toe, a little underpowered, with the ability to go between {2WD, 4WD, and 4WD low}, and fun on non-paved roads. But one word of warning, driving a Jimny in Iceland's very windy weather can be more of an exercise in sailing than driving. Either that or the Jimny I rented was way out of alignment. Felt like the 4x4 didn't have enough toe-in in the front, very twitchy, near impossible to drive in a straight line at speed with the combination of Icelandic cross winds, warped tyre grooves in the roads, and the wake of large trucks going the opposite direction. Lots of fun.

Food
Lived off cheeses, smoked fish, and water for breakfast. Lots of great places to grab dinner around Reykjavik we liked: Fiskmarkaðurinn, Grillmarkaðurinn, and Forréttabarinn. Short summary: langoustine are quite awesome, horse was fantastic, minke tastes like one would expect from a red meat mammal with a twist, and puffin tastes like a cross between liver and fish. Also managed to catch a live DJ'ed broadcast of the Iceland vs Kazakhstan game one night downtown in the plaza next to Fiskmarkaðurinn.

Excursions
Found www.extremeiceland.is to be quite useful as it collects a huge amount of options in one place with photos. We did a 2 hour Cessna air tour from www.eagleair.is on our first full day in Iceland. It is the best way to really understand Iceland. The photo in this post was taken on that flight. We pre-booked, and got super lucky because that day was the only day not rained out in the week we were there. Tried to do a RIB boat whale watching tour, but that got canceled due to weather. Did one of the tours to a man-made cave in the glacier Langjökull, which was my wife's favorite part of the trip. The dirt road drive to the base of Langjökull was quite fun; I'd advise just skipping the standard hotel-to-event tour bus options. The drive South to Sólheimajökull has a lot of great waterfalls. Apparently this was a great time to attempt to see the Northern Lights, except the weather didn't work out, and we didn't pre-search a good viewing location (fail on no wifi). On the last day we tried the Blue Lagoon, which seems like a man-made pool covered in some mineral sand which gets geo-thermal sea water runoff from the power plant. The contrast between freezing out of the water and cooking in the water was quite refreshing. I would really like to go back and do the natural ice cave tours in the winter.

Self-Correcting CPU Pipelines

Elaborating an idea which came up in a prior conversation...

The aim is to make CPU programming as fun and easy as iterative edit-and-reload GPU shader programming. Re-purpose the CPU page tables for intra-app memory protection between in-app tasks, effectively providing hardware protection for functional programming. When in-app tasks switch, adjust the page table protection, then use INVLPG (on 486 and up) to flush TLB entries for the changed pages. The page tables change to disallow writes to now read-only data from the prior completed job, and to allow writes to write-able data for the current job. Use x86's support for various page sizes, and pre-staged page tables for fixed jobs, to make this efficient. Set up a background watchdog to act as a TDR check, resetting to a stable state if any task runs too long (infinite loop, etc). The "kernels" or CPU jobs which transform data can also be designed to be somewhat self-correcting, using min or max to limit indexes, etc...

20150829

Ketogenic Diet - Working on Year 2

Part way into year two on a Ketogenic diet, breaking the diet only once in a while on business trips. The diet is basically mostly fat, some protein, with almost no carbs.

Initially established in the 1920s as a way to control seizures for people with epilepsy, the Ketogenic diet is being successfully used as a metabolic treatment for cancer by a few individuals, but is largely being ignored by medical professionals. The diet works by shifting the body's metabolism from glucose (sourced from carbs or converted from excess protein) to ketones produced in the liver (sourced from fat and oil). The diet has natural anti-inflammatory properties. The theory of how a Ketogenic diet fights cancer revolves around the idea that cancer is mostly a mitochondrial metabolic disease: specifically that cancer cells tend to have damaged mitochondria which switch to a more primitive glucose-fueled fermentation as their primary energy generation process. Starving cancer cells of glucose places them in extreme metabolic stress, allowing the body to fight back. One of the primary ways to track cancer is by looking at the process of tumor angiogenesis via periodic MRIs with contrast: effectively watching over time as the cancer causes the body to develop an ever stronger network of blood vessels to feed the cancer with glucose. Successful treatments of cancer can reverse this process. I suspect ultimately that everyone has cancer, even if only at some undetectable amount. The question is whether the body's balance shifts to a state which enables the cancer to grow, or a state which causes the cancer to die. Cancer becomes terminal when there is no longer a way to shift back the balance.

The Ketogenic diet for me is a lifestyle choice, not made out of medical necessity. My personal tastes tend to really align with the diet, and it is a great way to stay in shape, more so when you have a career sitting at a desk typing away on keyboards. Counter to how the media vilifies fat as the source of the nation's obesity problem, it is near impossible to maintain body fat on a diet which involves mostly eating fat: the body is in a constant state of fat burning, instead of fat storage.

Looking back, it was relatively hard to get started. The realization that America's entire food culture and supply chain is optimized for the delivery of carbs leaves a demilitarized zone filled with land mines for the oil-fueled consumer. Just finding things which are within the parameters required for the diet can be quite a challenge. After a while, planning every meal, weighing all ingredients, and measuring ketone or blood glucose levels is replaced with driving by feel alone. The transition between the carb-burning and ketone-burning body state goes through a standard process of horrible sugar withdrawal symptoms, bouts of fatigue and brain fog, eventually returning to the feeling of being normal, but then unaffected by the standard cravings carb eaters have. The first transition takes weeks; however, after being ketone burning for this long, the transition now only takes me a few days.

Over time the diet becomes as enjoyable as the standard high-carb diet, and even more so in many regards, because of the ability to easily take in 70% fat at a given meal (like bacon wrapped sour cream). Unlike sugar, there is no crash afterwards, and the body provides some rather strong signals to stop eating before you overdo it, instead of telling you to keep going as is the standard practice with sugars. Here is an example of the kind of food my wife and I eat: butter on low heat, mixed-in spice, garlic, and boiled shrimp. Consumed head, shells and all, with sour cream on the side, straight, to bring up the fat content,

Maybe this post should just be called "Indie Shock". Interesting graph below, posted on twitter, of the number of Steam game releases over time. Saturating, isn't it?

Thoughts From Personal Perspective as a Consumer
Engines like Unity and Unreal make it much easier to produce games, but the games tend to be more similar, staying within the limitations imposed by these mass-market engines. The same effect happens as independent engine tech all falls into the same local minimum, or developers limit risk by staying within the confines of well-walked genres. This makes it harder for a consumer to differentiate between titles. Choice in a sea of noise is random. It is not so much the content which shapes the purchase decision in that case, but rather how the consumer gets directed by marketing.

As it becomes harder to choose, and as more choices result in failure of satisfaction, the barrier to purchase increases, and even the falling price cannot compensate. The price of free is actually quite high: the opportunity cost of doing something more compelling with one's time.

"Nobody Cares" aka the Excuse For Being Mediocre
Why bother investing time to achieve greatness? Proponents of this line of thinking often present justification in the form that the average consumer cannot tell the difference between low and high quality. For a producer this is effectively a self-selecting choice to continue to swim in that sea of noise. Some forms of greatness may not be perceived at the conscious level, may not be something a consumer can articulate in words, but may instead manifest only in feel, and yet have a profound effect.

Outliers
Knowledge of excellence in some aspect which affects only a fraction of the market, say awesome multi-GPU support, establishes a hint that the producer cares about the product at a level beyond serving me a microwaved pre-constructed hamburger. It is very hard to maintain employment of the creative and driven individuals who produce top content without allowing them to strive for greatness, even sometimes at the compromise of maximum profitability.

As a consumer in a sea of noise, I select for the expression of producers looking to be the best that is possible: those who, given the limitations of architecture and time, choose paths which compromise in a way which allows a unique realization of the ultimate form of their art.

20150818

Quick ACES Thoughts

Appears that the RRT global de-saturate step applied in AP1 drops to a gamut smaller than Rec2020. This seems to be ok when targeting Rec709/sRGB, but I'm not sure it is future proof in the context of Rec2020. Seems like the reference ACES ODT for Rec709 at 48 nits ends up with gamut clipping when inputs to the RRT cover the full positive part of AP1 space. Those working with sRGB/Rec709 primaries in the rendering pipeline might not have issues here, depending on how much saturation is added during grading before the RRT. Guessing some people would rather be able to go nuts anywhere in the human perceptual space and have it smoothly map to the output display space?

20150814

The Written Word

Growing older, I find that games, movies, and TV are all limiting forms of entertainment, and that by far the best form of story driven consumables is the book. Right now I'm halfway through On the Steel Breeze by Alastair Reynolds, taking a break to reflect. Something was lost over the years as digital entertainment evolved from the soup of interactive text adventures. I certainly enjoy the visual representation of a good story, but I enjoy more the freedom to explore stories which could never gain the support necessary for a non-literary translation. Early in gaming there was an interesting balance forced by the limitations of the machine, where the written word took the place of electronically "physically" realizing everything in the game. It would be great once in a while to trade the modern game single-player storyline, played out in "cut scenes", for a story of the caliber of a great novel, represented instead in "cut pages" of text. Then shifting the focus of development and polish back into the game itself.