From the Burrow

Hello progress my old friend

2016-12-06 15:52:25 +0000

Ah this week was so much better, my brain and I were on the same team.

I made good progress on first-class functions in my compiler. The way I implemented them is roughly as follows:

First I make a class to represent compile-time values:

(defclass compile-time-value (v-type)
  (ctv))

It inherits from v-type as that is the class of my compiler’s types.

It has one slot called ctv that is going to store what the compiler thinks the actual value is during compilation.

IIRC this associating of a value with a type is called ‘dependent types’. However I’m going to avoid that name as I don’t know nearly enough about that stuff to associate myself with it. I’m just going to call this compile-time-values or ctvs.

Next we need a type for functions.

(defclass function-spec (compile-time-value)
  (arg-spec
   return-spec))

Here we make a type that has a list of types for the arguments (arg-spec) and a list of types for the returns (return-spec). Return is a list as lisp supports multiple return values. Being a ctv the compiler can now associate values with this type.

Note we don’t have a name here as this is just the type of a function, not any particular one. In my compiler I have a class called v-function that describes a particular function. So there is a v-function for sin for example.

In lisp to get a function object we use the #' syntax. So #'print will give you the function named print. #'thing expands to (function thing), so in my compiler I defined a ‘special form’ called function that does the following (sketched in code after the list):

  1. look up the v-function object for that name
  2. make an instance of function-spec with the result of step 1 as the ctv
  3. use the result of step 2 as the type of this form.
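In rough code, those three steps come out something like this (find-v-function is an invented name for illustration, not Varjo’s real internals):

(defun compile-function-special-form (name env)
  ;; 1. look up the v-function object for the name
  (let ((fn (find-v-function name env))
        ;; 2. make a function-spec and store the v-function as its ctv
        (spec (make-instance 'function-spec)))
    (setf (slot-value spec 'ctv) fn)
    ;; 3. the caller then uses spec as the type of the #'name form
    spec))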

Nice! This means the specific function is now associated with this type and will be propagated around.

(let ((our-sin #'sin))
  (funcall our-sin 10))

Later our compiler will get to that funcall expression. It will look at the type of our-sin and see the ctv associated with it. It will then transform (funcall our-sin 10) to (sin 10) and compile that instead.

Functions that take compile time values as arguments

We do a very simple hack when it comes to this. If we have something like this:

;; this takes a func from int to int and calls it with the provided int
(defun some-func-caller ((some-func (function (:int) :int))
                         (some-val :int))
  (funcall some-func some-val))

And we call it in the following code:

(labels ((our-local-func ((x :int))
           (* x 2)))
  (let ((our-val 20))
    (some-func-caller #'our-local-func our-val)))

Then the compiler will swap out the (some-func-caller #'our-local-func our-val) call with a local version of the function with the compile-time argument hardcoded:

(labels ((our-local-func ((x :int))
           (* x 2)))
  (let ((our-val 20))
    (let ((some-func #'our-local-func))
      (labels ((some-func-caller ((some-val :int))
                 (funcall some-func some-val)))
        (some-func-caller our-val)))))

The some-func var is in scope for the local function some-func-caller so the transform we mentioned earlier will just work. The rest is just a local function transform and the compiler already knew how to do that.

Things get more complicated with closures and I haven’t finished that. I can now pass closures to functions but I cannot return them from functions yet. I know how I could do it but it feels hacky and so I’m waiting for more inspiration before I try that part again.

Primed for types

With all this compiler work my brain was obviously in the right place to start thinking about static typing in general. Being able to define your own type-system for lisp is something I have wanted for ages, but as support for this isn’t built into the spec I’ve been trying to work out what the ‘best approach™’ is.

Quick notes for those interested: Lisp has an expressive type system and a bunch of features to make serious optimizations possible. However it doesn’t have something to let me define my own type system and use it to check my lisp code.

The problem boils down to macroexpansion. If you want to typecheck your code you want to expand all those macros so you are just working with vars, functions & special-forms (don’t worry about these). However there isn’t a ‘macroexpand-all’ function in the spec[0]. There is a function for macroexpanding a macro form once, however this does not take care of things like the fact that you can define local, lexically scoped macros. This means there is an ‘expansion environment’ that is passed down during the expansion process, and manipulating this is not covered by the spec.
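A tiny example of why the environment matters:

(macrolet ((twice (x) `(* 2 ,x)))
  (twice 10))

At the top level (macroexpand '(twice 10)) can’t expand twice at all; the expansion only makes sense inside the environment the macrolet creates. You can receive that environment in a macro via &environment, but building or extending one yourself while walking code is exactly the part the spec leaves out.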

There is however a tiny piece of fucking voodoo code that was written by one of the lisp compiler guys. It allows you to define a locally scoped variable that is available at compile time within the scope of the macro. With this I can create an object that will act as my ‘expansion environment’ and let me have what I need.

Anyhoo, the other day I came up with a scheme for defining blocks of code that will be statically checked and for how I will do macroexpansion. It’s not perfect, but it’s predictable and that will do for me.

I am going to make a library whose current working title is checkmate. It will provide these static-blocks and within those you will be able to emit facts about the expressions in your code. For function calls it will then call a check-facts method with the arguments for the function and all the facts it has on them. You can overload this method and provide your own checking logic.

The facts are just objects of type fact and can contain anything you like. And because you implement check-facts you can define any logic you like there.

This should give me a library which makes it easier to define a type system. I can subclass fact and call that type and inside check-facts I implement whatever type checking logic I like.
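As a sketch, the shape I have in mind is something like this (every name is provisional, none of it exists yet):

(defclass fact () ())

(defgeneric check-facts (function-name arg-facts)
  (:documentation "Overload this with your own checking logic."))

;; a toy type system hung off the protocol: facts carry a type...
(defclass type-fact (fact)
  ((ftype :initarg :ftype :reader fact-type)))

;; ...and the check logic rejects calls whose argument facts don't fit
(defmethod check-facts ((fn (eql 'square)) arg-facts)
  (let ((arg (first arg-facts)))
    (unless (and (typep arg 'type-fact)
                 (eq (fact-type arg) :int))
      (error "square expects an :int argument"))))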

A while back I ported an implementation of the Hindley (Damas) Milner checking algorithm to lisp, so my goal is to make something where I plug this in and get classic ML style type checking on lisp code.

Wish me luck!

Next?

I’m not sure, my next year is going to contain a lot of study so I hope I can keep on top of these projects as well. The last few weeks have certainly reminded me to trust my instincts on what I should be working on, and it’s good to feel ‘out of the woods’ on that issue.

Peace

[0] Although a bunch of implementations do provide one. I made a library to wrap those, so technically I do have something that could work.

Let it lie

2016-11-28 13:37:22 +0000

I missed a week (SSSHHAAAME) because I didn’t get much of note done. I got mapping over tables working, along with destructuring of flat record data, but the output versus how much time I was sitting in front of the machine simply didn’t add up.

I’ve decided to stop fighting my brain and just put the project down for a bit. This sucks as it means I fail my November goal but I just have to accept that I either need to force myself to do something my brain isn’t enjoying (which isn’t the best way to wind down after work) or do something else. At the very least I get to confirm things I have been learning about how I learn/work, so that is some kind of positive I can scrape from this.

With this accepted I started looking at first-class functions in my shader compiler (Varjo).

It’s been a month since I touched this, so I spent a little time re-familiarizing myself with the problem and then I got to work.

First order of business was to get funcall working with variables of function type that are in scope. Something like this:

(let ((x #'foo))
  (let ((y x))
    (funcall y 10)))

I got the logic working for the above and then I spent a few hours making some parts of my compiler more strict about types. Some areas were just too forgiving about whether you had to provide a type object, or let you pass a type signature instead. This made some other code more complicated than it needed to be. This was a relic from a much older version of the compiler.

I then spent some time thinking about how to handle passing functions to functions. I can use my flow analyzer and multiple passes but I don’t want to use that hammer if things can be easier.

For example let’s take this:

(labels ((foo ((f (function (:int) :int))) ;; take a func from :int -> :int and call it with 10
           (funcall f 10))

         (bar ((x :int)) ;; some func from :int -> :int
            (* x 2)))
  ;; do it
  (funcall #'foo #'bar))

I can replace the (funcall #'foo #'bar) with this:

(labels ((foo ()
           (let ((f #'bar))
             (funcall f 10))))
  (funcall #'foo))

which will get turned into

(labels ((foo ()
           (bar 10)))
  (funcall #'foo))

This means I generate a new local function for each call-site of a function that takes functions. The compiler will then remove any duplicate definitions.

At this point it’s worth pointing out one of the design goals of this feature. Predictability. This code is valid lisp:

(defun pick-func ()
  (if (< some-var 10)
      #'func-a
      #'func-b))

(defun do-stuff ()
  (funcall (pick-func) 10))

But at runtime we can’t pass functions around, so the best we could do for the above is to return an int and switch based on that.

int pick_func() {
  return (some_var < 10) ? 0 : 1;
}

void do_stuff () {
  switch (pick_func()) {
  case 0:
    func_a(10);
    break;
  case 1:
    func_b(10);
    break;
  }
}

This would work but this pattern can be slow if used too much. For now Varjo instead chooses to disallow this and makes you implement it yourself. This means there are fewer cases where you are guessing what the compiler is up to if your code is slow. The compiler will be able to generate very precise error messages to explain what is happening in those cases.

That’s all for now. I’ve also got a bunch of ideas for this that are still very nebulous, I’ll write more as they become concrete.

Ciao

Trudge

2016-11-15 16:58:16 +0000

This week I have been kind of working on the data-structure thing I mentioned last week.

The reason that it is ‘kind of’ is that I am having a big problem focusing on actually coding it. Over the course of the weekend I procrastinated my way into watching 3 movies rather than coding.

It is an odd one as I love the idea of the project, I want it to exist and (at least in the abstract) I am really interested in the implementation. However, for whatever reason, I am just struggling to stay focused when coding the damn thing. I haven’t pinned down what the issue is, but if I do I’ll report it here.

OK so what did I do?

  • A bunch of small benchmarks to prove the premise to myself
  • Defined the base types
  • Worked out how the live redefining of types will work
  • Started work on record types (which will be the types of the columns of the tables)

The third one took the most time as I want both the old and new type to exist simultaneously. This will allow me to create the type and then sweep through the existing tables to try and upgrade them to the new type. If this fails (or the user halts it for some reason) then we can still keep working with both the old and new types. I had to prove to myself that I could do this in a way that wouldn’t just pollute the namespaces with crazy numbers of types & functions.
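The sort of scheme I convinced myself would work looks roughly like this (purely illustrative names, not the project’s actual code):

;; Each redefinition mints a fresh name in a hidden package, so old and
;; new versions stay live side by side until every table has migrated.
(defpackage #:tables.generated (:use))

(defun versioned-name (base version)
  (intern (format nil "~:@(~a~)-V~a" base version) '#:tables.generated))

;; (versioned-name "person" 1) => TABLES.GENERATED::PERSON-V1
;; (versioned-name "person" 2) => TABLES.GENERATED::PERSON-V2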

Another great side effect of this is that we can compile the types and functions with optimization set to high, which gives us the most accurate picture of how our code will behave when we ship it. We can do this as only the tables implementation calls these functions or uses these types directly, so there is almost no place where this will cause the user issues (unless they go out of their way to do that).
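Concretely, the generated accessors can carry their own optimize declaration so the settings never leak into user code. A made-up example of the kind of thing the table machinery might emit:

(defun person-age (row)
  (declare (optimize (speed 3) (safety 0) (debug 0))
           (type (simple-array (unsigned-byte 32) (*)) row))
  (aref row 1))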

Sadly that’s all for this week. Let’s hope next week goes a little better

My plan for November

2016-11-07 21:34:06 +0000

I’ve been writing up what I’ve been up to every week on a site called everyweeks.com. It was started by some friends and I’ve been trying to keep myself to it. However it has meant I’ve been bad at posting here.

Before now I’ve felt it rude to just dump a link here each week. I thought I should be writing fresh content for each site. But fuck it, if the choice is a weekly link to an article or a dead blog..I choose the link.

So here is this week’s one. It’s a plan of something I want to make this month.

Much reading but less code

2016-08-08 12:44:09 +0000

TLDR: If you add powerful features to your language you must be sure to balance them with equally powerful introspection

I haven’t got much done this week; I have some folks coming from the UK soon so I’m cleaning up and getting ready for that. I have decided to take on more projects though :D

Both revolve around shipping code, which was the theme last week as well. At first I was looking at some basic system calls and it seemed that, if there wasn’t a good library for this already, then I should make one. Turns out that was mostly me not knowing what I’m doing. I hadn’t appreciated that the posix api specifies itself based on certain types, and that those types can have different sizes/layouts/etc on different platforms, which means that we can’t just define lisp equivalents without knowing those details. To get around this we need to use cffi’s groveler, but this requires you to have a C compiler set up and ready to go. This, in my opinion, sucks as the user now has to think about more than the language they are trying to work in. Also, because all libraries are delivered through the package manager, you tend not to know about the C dependencies until you get an error during build, which makes for a pretty poor user experience.

To mitigate this what we can do is cache the output of the C program along with info on what platforms it is valid for. That second part is a little more fiddly as the specification that is given to the groveler is slightly different for different platforms, and those differences are generally expressed with read-time conditionals. If people use reader conditionals in the specification then the cache is only valid if the results of the reader conditionals match. The problem is that the code that doesn’t match the condition is never even read, so there is nothing to introspect.
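For example, a grovel spec might contain something like this (headers chosen just for illustration):

    ;; on Linux the reader never even reads the #+darwin line below, so
    ;; nothing downstream can know this spec consulted the :darwin feature
    #+linux (include "sys/epoll.h")
    #+darwin (include "sys/event.h")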

One solution would be to tell people not to use reader conditionals and to use some other way of specifying the features required; we would make this a standard and we would have to educate people on how to use it.

But #+ and #- are just reader macros, so we could replace them with our own implementation which would work exactly the same, except that it would also record the conditions and the results of those conditions.

This turned out to be REALLY easy!

The mapping between a character pattern like #+ and the function it calls is kept in an object called a readtable. We don’t want to screw up the global readtable so we need our own copy.

    (let ((*readtable* (copy-readtable)))
      ...)

*readtable* is the global variable where the readtable lives, so now any use of the lisp function read inside the scope of the let will be using our readtable (this effect is thread local).

Next we replace the #+ & #- reader macros with our own function my-new-reader-cond-func:

    (set-dispatch-macro-character #\# #\+ my-new-reader-cond-func *readtable*)
    (set-dispatch-macro-character #\# #\- my-new-reader-cond-func *readtable*)

And that’s it! my-new-reader-cond-func is actually a closure over an object that the conditions/results are cached into, but that’s just boring details.
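For the curious, the rough shape of such a function is below. This is a simplified sketch rather than the library’s actual code; the box is just a cons whose cdr collects (feature-expr . result) pairs.

    (defun feature-present-p (expr)
      ;; minimal evaluator for feature expressions like (:and :sbcl (:not :windows))
      (etypecase expr
        (symbol (and (member expr *features*) t))
        (cons (ecase (first expr)
                (:and (every #'feature-present-p (rest expr)))
                (:or  (some  #'feature-present-p (rest expr)))
                (:not (not (feature-present-p (second expr))))))))

    (defun make-recording-reader-cond (box)
      (lambda (stream sub-char numarg)
        (declare (ignore numarg))
        (let* ((feature-expr (let ((*package* (find-package :keyword)))
                               (read stream t nil t)))
               (result (feature-present-p feature-expr)))
          ;; record what was asked and what the answer was
          (push (cons feature-expr result) (cdr box))
          (if (eq result (char= sub-char #\+)) ; #+ wants true, #- wants false
              (read stream t nil t)            ; keep the guarded form
              (let ((*read-suppress* t))       ; or skip it entirely
                (read stream t nil t)
                (values))))))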

The point is we can now introspect the reader conditions and know for certain what features were required in the spec file, and we do this without having to add any new practices for library writers.

This is the reason for the TLDR at the top:

If you add powerful features to your language you must be sure to balance them with equally powerful introspection

Or at least trust programmers with some of the internals so they can get things done. You can totally shoot your foot off with these features, but that ship sailed with macros anyway.

I wrapped all this up in a library you can find here: https://github.com/cbaggers/with-cached-reader-conditionals

Other Stuff

Aside from this I:

  • Pushed a whole bunch of code from master to release for CEPL, Varjo, rtg-math & CEPL.SDL2

  • Requested that the new with-cached-reader-conditionals library and two others (for loading images) be added to quicklisp (the lisp package manager). So more code shipping! yay!

Enough for now

Like I said, next week I’ll have people over so I won’t get much done, however my next goals are:

  • Add groveler caching to the FFI

  • Add functionality to the build system to copy dynamic libraries to the build directory of a compiled lisp program. This will need to handle the odd stuff OSX does with frameworks

Seeya folks, have a great week

Stuck in port

2016-08-01 12:08:40 +0000

I have been looking at shipping cl code this week; some parts of this are cool, some are depressingly still issues.

save-and-die is essentially pretty easy, but as I use C libraries in most of my projects it makes things interesting: when the image loads, it looks for those shared-objects in the last place it found them. That’s ok when I’m running the project on the same machine, but not when the binary is on someone else’s.

The fix for this is to, just before calling save-and-die, close all the shared-objects, and then make sure I reload them again when the binary starts. This is cool as I can use #'list-foreign-libraries to make local copies of those shared-object files and load them instead. Should be easy right?..Of course not.
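The ‘easy’ part, sketched in CFFI terms (save-lisp-and-die is SBCL’s dump function, and local-library-directory and main are placeholders):

(defun ship-it (path)
  ;; close every open shared-object before dumping the image
  (dolist (lib (cffi:list-foreign-libraries))
    (cffi:close-foreign-library lib))
  (sb-ext:save-lisp-and-die path :executable t :toplevel #'boot))

(defun boot ()
  ;; on startup, look for the libraries next to the binary first
  (pushnew (local-library-directory) cffi:*foreign-library-directories*)
  (dolist (lib (cffi:list-foreign-libraries :loaded-only nil))
    (cffi:load-foreign-library (cffi:foreign-library-name lib)))
  (main))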

OSX has this concept called run-path dependent libraries; feel free to go read the details, but the long and the short of it is that we can’t just copy the files, as they have information baked in that makes them look for dependent libraries outside of our tidy little folder.

Luckily the most excellent shinmera has, in the course of wrapping Qt, run into everything I have and plenty more, and dealt with it ages ago. He has given me some tips on how to use otool and install_name_tool. Also, from talking to Luis on the CFFI team, we agree that at least some of the unload/reloading stuff could be in that library.

To experiment with this stuff I started writing shipshape, which lets me write (ship-it :project-name) and it will:

  • compile my code to a binary
  • pull in whatever media I specify in a ‘manifest’
  • copy over all the C libraries the project needed and do the magic to make them work.

This library works on linux but not OSX for the reasons listed a couple of paragraphs up. However it is my playpen for this topic.

In the process of doing this I had a facepalm moment when I realized I hadn’t looked beyond trivial-dump-core to see what was already available for handling the dump in a cross-platform way. Of course asdf has robust code for this. I felt so dumb knowing so little about ASDF that I resolved to thoroughly read up on it. Fare’s writeups are awesome and I had a great time working through them. The biggest lesson I took away was how massively underspec’d pathnames are and that, whilst uiop does its best, there are still paths that can’t be represented as pathnames.

Half an hour after telling myself I wouldn’t try and make a path library I caught my brain churning on the problem, so I started an experiment to make the most minimal reliable system I can; it’s not nearly ready to show yet but the development is happening here. The goal is simply to be able to define and create kinds of path based on some very strict limitations. This will not be a general solution for any path, but should be fine for unix and ntfs paths. The library will not include anything about the OS, so then I will look at either hooking into something that exists or making a very minimal wrapper over a few functions from libc & kernel32. I wouldn’t mind using iolib, but its build process assumes a gcc setup, which I simply can’t assume for my users (or even all my machines).

One thing that I am feeling really good about is that twice in the last week I have sat down to design APIs and it has been totally fluid. I was able to visualize the entire feature before starting and during the implementation very little changed. This is a big personal win for me as I don’t often feel this in any language.

Wow, that was way longer than I expected. I’m done now. Seeya!

A bunch o stuff

2016-07-26 13:10:32 +0000

Ok so it’s been a couple of weeks since I wrote anything, sorry about that. Two weeks back I was at Solskogen, which was awesome, but I was exhausted afterwards and never wrote anything up. Let’s see if I can summarize a few things.

PBR

I’m still struggling with the image based lighting portion of physically based rendering. Ferris came round for one evening and we went through the code with a comb looking for issues. This worked really well and we identified a few things that I had misunderstandings about.

The biggest one (for me) was a terminology issue: namely, if an image is in sRGB then it is non-linear, but an image in an sRGB texture is linear when you sample it. I was just seeing sRGB and thinking ‘linear’ so I was making poor assumptions.
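In code the distinction is just the sRGB transfer curve; sampling an sRGB texture applies roughly this for you, whereas a plain texture hands you the stored values untouched (per channel, s in [0,1]):

(defun srgb->linear (s)
  (if (<= s 0.04045)
      (/ s 12.92)
      (expt (/ (+ s 0.055) 1.055) 2.4)))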

It was hugely helpful to have someone who has a real knowledge and feel for graphics look at what I was doing.

I have managed to track down some issues in the point lighting though, so that is looking a little better now. See the image at the top for an example; the fact that the size of the reflection is different on those different materials is what I am happiest to see.

C Dependencies

Every language needs to be able to interface with C sooner or later. For lisp it was no different and the libraries for working with shared objects are fantastic. However the standard practice in the lisp world (as far as I can see) is not to package the c-libraries with the lisp code but instead to rely on them being on the user’s system.

This is alright on something like linux, but the package manager situation leaves a lot to be desired on other platforms. I have been struggling with Brew and MacPorts shipping out-of-date or plain broken versions of libraries. Of course I could work on packaging these myself, but it wouldn’t solve the Windows case.

Even on linux we sometimes find that the cadence of the release cycles is too slow for certain libraries; the asset importing library I use is out of date in the debian stable repos.

So I need to package C libraries with my lisp libraries. That’s ok, I can do that. But the next issue is with users shipping their code.

Shipping Lisp with C Libraries

Even if we package C libraries with lisp libraries this only fixes things for the developer’s machine. When they dump a binary of their program and give it to a user, that user will need those C libraries too.

The problem is that ‘making a binary’ in lisp generally means ‘saving the image’. This is like taking a snapshot of your program as it was in a particular moment in time..

Every value

Every variable

Every function

..is saved in one big file. When you run that file the program comes back to life and carries on running (some small details here but meh).

The problem is that one of the values that is saved is the location of the c libraries :p

So this is what I am working on right now, a way to take a lisp project, find all the c-libraries it uses, copy them to a directory local to the project and then do some magic to make sure that, when the binary is run on another person’s machine, that it looks in the local directory for the C dependencies.

I think I got 90% of the mechanism worked out last night. Tonight I try and make the bugger work.

Misc

  • Fixed bugs running CEPL on OSX
  • Added implementation-specific features to the lisp FFI library
  • Added some more functions to nineveh (which will eventually be a library of helpful gpu functions)
  • Fixed loading of cross cubemap hdr textures
  • Added code to unbind samplers after calling pipelines (is this really necessary?)
  • Fixed an id dedup bug in the CEPL sampler object abstraction
  • Added support for Glop to CEPL (a lisp library that can create a GL context and windows on various platforms, basically a lisp alternative to what I’m using SDL for)
  • Added GL version support for user-written gpu functions
  • Added super clean syntax for making an fbo with 6 color attachments bound to the images in a cube texture
  • Added support for bitwise operators in my gpu functions
  • Emit code to support GLSL v4.5 implicit casting rules for all GLSL versions

And a pile of other bugfixes in various projects

Misc bits

2016-07-10 20:39:56 +0000

A few things have been happening

Geometry

Fixed a few bugs in CEPL so a user could start making geometry shaders, AND HE DID :) He is using my inline glsl code as well, which was nice to see.

Cross platform stuff

Fixed a bug in CEPL which resulted from changes to the SDL2 wrapper library I use. The guys making it are awesome but have a slightly different philosophy on threading: they want to make it as transparent as possible, I want it explicit. Their way is better for starting out, or for games where you are never going to have to worry about that performance overhead, but I can’t risk that in CEPL.

The problem that arose was that I was calling their initialization function and was getting additional threads created. To solve this I just called the lower level binding functions myself to initialize sdl2. As ever though huge props to those guys, they are making working with sdl2 great.

I also got a pull request fixing a windows10 issue which I’m super stoked about.

With this CEPL is working on OSX and Windows again

PBR one day..

Crept ever closer to pbr rendering. I am going too damn slowly here. I ran into some issues with my vector-space feature. Currently spaces must be uploaded as uniforms and I don’t have a way to create a new space in the shader code. I also don’t have a way to pass a space from one stage to another. This was an issue as for tangent-space normals you want to make tangent space in the vertex shader and pass it to the fragment shader.

To get past this issue more quickly I added support for the get-transform function on the gpu. The get-transform function takes 2 spaces and returns the matrix4 that transforms between them.
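Usage inside a gpu function then looks something like this toy fragment (v! is the usual rtg-math vector constructor; take the syntax as approximate):

    ;; fetch the mat4 from model-space to tangent-space and use it directly,
    ;; instead of constructing tangent space by hand in the shader
    (let ((model->tangent (get-transform model-space tangent-space)))
      (* model->tangent (v! normal 0.0)))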

This just required modifying the compiler pass that handles space transformation, so it didn’t need much extra code.

Filmic ToneMapping

A mate put me onto this awesome page on ‘Filmic Tonemapping Operators’ and I obviously want to support HDR in my projects, so I have converted these samples to lisp code. I just noticed that I haven’t pushed this library online yet, but I will soon.
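As a taster, here is one of the operators from that page, John Hable’s Uncharted 2 curve, in scalar lisp form:

    (defun hable (x)
      ;; apply per channel, then divide by the white point pushed
      ;; through the same curve, e.g. (hable 11.2)
      (let ((a 0.15) (b 0.50) (c 0.10) (d 0.20) (e 0.02) (f 0.30))
        (- (/ (+ (* x (+ (* a x) (* c b))) (* d e))
              (+ (* x (+ (* a x) b)) (* d f)))
           (/ e f))))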

The mother of all dumb FBO mistakes

I have been smacking my head against an issue for days and it turned out to be a user level mistake (I was that user :p).

The setup was a very basic deferred setup: the first pass was packing the gbuffer, the second shading using that gbuffer. But whilst the first pass appeared to be working when drawing to the screen, it was failing when drawing to an FBO; the textures were full of garbage that could only have been random gpu data, and only one patch seemed to be getting written into.

Now as I hadn’t done enough testing on the multi render target code I assumed that it must be broken. Some hours of digging later it wasn’t looking hopeful.

I tested on my (older) laptop..and it seemed better! There was still some corruption, but less of it, and more of the model was showing…weird.

This was also the first time working with half-float textures as a render target, so I assumed I had some mistakes there. More hours later no joy either.

Next, I had been fairly sure viewports were involved in this bug somehow (given that some of the image looked correct), but try as I might I could not find the damn bug. I triple checked the size of all the color textures.. and the formats and the binding/unbinding in the abstractions.

Nothing. Nada. Zip

Out of desperation I eventually made an fbo and let CEPL set all the defaults except size…AND IT WORKED…what the fuck?!

I looked back at my code that initialized the FBO and finally saw it:

    (make-fbo `(0 :dimensions ,dim :element-type :rgb16f)
              `(1 :dimensions ,dim :element-type :rgb16f)
              `(2 :dimensions ,dim :element-type :rgb8)
              `(3 :dimensions ,dim :element-type :rgb8)
              :d)

That :d in there is telling CEPL to make a depth attachment and to use some sensible defaults. However it is also going to pick a size, which by default will be the size of the current viewport *smashes face into table*

According to the GL spec:

If the attachment sizes are not all identical, rendering will be limited to the largest area that can fit in all of the attachments (an intersection of rectangles having a lower left of (0 0) and an upper right of (width height) for each attachment).

Which explains everything I was seeing.

As it is more usual to make attachments the same size, I now require a flag to be set if you want attachments with different sizes, along with a big ol’ explanation of this issue in the error message you see if you don’t set the flag.

Well..

With that madness out of the way I fancy a drink. Seeya all later!

Mainlining Videos

2016-06-27 10:42:40 +0000

It has been a weird weekend. I had to go to hospital for 24 hours and wasn’t in a state to be making stuff, but I did end up with a lot of time to consume stuff, so I thought I’d list down what I’ve been watching:

  • CppCon 2014 Lightning Talks - Ken Smith, C Hardware Register Access - This was ok I guess; it is mainly a way of dressing up register calls so their syntax mirrors their behaviour a bit more. After having worked with macros for so long this just feels kinda sensible and nothing new. Still, it was worth a peek.
  • Pragmatic Haskell For Beginners - Part 1 (can’t find a link for this) - I watched a little of this and it looks like it will be great, but I want to watch more fundamentals first and then come back to this.
  • JAI: Data-Oriented Demo SOA, composition - Have watched this before but rewatched it to internalize more of his approach. I really am considering implementing something like this for lisp, but want to see in how many places I can bridge lisp and foreign types in the design. I highly recommend watching his talk on implicit context, as I think the custom allocator scheme plays really well with the data-oriented features (and is something I want to take ideas from too).
  • Java byte-code in practice - Started watching this one but didn’t watch all the way through as it’s not relevant to me right now. I looked at this stuff while I was considering alternate ways to do on-the-fly language bindings generation, but I don’t need this now (I wrote a piece about our new approach a while back).
  • Relational Programming in miniKanren by William Byrd, Part 1 and Part 2 - This has been on my watch list for ages, a 3 hour intro to miniKanren. It was ace (if a bit slow moving). Nice to see what the language can and can’t do. I’m very interested in using something like this as the logic system in my future projects.
  • Production Prolog - Second time watching this and highly recommended. After looking at miniKanren I wanted to get a super high-level feel for prolog again, so I watched this as a quick refresher on how people use it.
  • Practical Dependently Typed Racket - Wanted to get a feel for what these guys are up to. Was nice to see what battles they are choosing to fight, and to get a feel for how you can have a minimal DTS and it still be useful.
  • Jake Vanderplas - Statistics for Hackers - PyCon 2016 - As it says. I feel I’m pitiful when it comes to maths knowledge and I’m very interested in how to leverage what I’m good at to make use of the tools statisticians have. Very simple examples of 3 techniques you can use to get good answers regarding the significance of results.
  • John Rauser keynote: Statistics Without the Agonizing Pain - The above talk was based on this one and it shows; however the above guy had more time and covers more stuff.
  • Superoptimizing LLVM - Great talk on how one project is going about finding places in LLVM that could be optimized. Whilst it focuses on LLVM, the speaker is open about how this would work for any compiler. Nice to hear how limited their scope was for their first version and how useful it still was. Very good speaker.
  • Director Roundtable With Quentin Tarantino, Ridley Scott and More - I watched this in one of the gaps when I was letting my brain cool down. Nothing revolutionary here, just nice to hear these guys speak.
  • Measure for Measure: Quantum Physics and Reality - Another one that has been on my list for a while. A nice approachable chat about some differing approaches to the wave collapse issue in quantum physics.
  • Introduction to Topology - This one I gave the most time. I worked through the first 20 videos of this tutorial series and they are FANTASTIC. The reason for looking into this is that I have some theories on the potential of automatic data transformation in the area of generating programs for rendering arbitrary datasets. I had spent an evening dreaming up roughly what I would need and then had a google to see if any math exists in this field. The reason for doing that is that you then know that smart people have proved whether you are wasting your time. The closest things I could find were based in topology (of various forms), so I think I need to understand this stuff. I’ve been making some notes so I’m linking them here, but don’t bother reading them as they are really only useful to me.

That’s more than enough for now, I’m ready to start coding again :p

Peace.

p.s. I also watched ‘The Revenant’ and it’s great. Do watch that film.

Reading on the road

2016-06-21 09:02:56 +0000

Hey,

I don’t have anything to show this week as I have been travelling for the last few days. However this has given me loads of time to read so I’ve had my face in PBR articles almost constantly.

I started off with this one, ‘Moving Frostbite to PBR’, but whilst it is awesome I found it really hard to understand without knowing more of the fundamentals.

I had already looked at this https://www.allegorithmic.com/pbr-guide which was a fantastic intro to the subject.

After this I flitted between a few more articles but got stuck quite often; the issue I had was finding articles that bridged the gap between theory and practice. The real breakthrough for me was reading these two posts from codinglabs.net:

After these two I had a much better feel for what was going on and then was able to get much further in this article from Epic on the Unreal Engine’s use of PBR.

Now this one I probably should have read sooner, but it still felt good to go through it again with what I had gained from the Epic paper.

And finally I got back to the frostbite paper which is outstanding but took a while to absorb. I know I’m going to be looking at this one a lot over the coming months.

That’s all from me, seeya folks.
