Yet another cool WebGL demo

I'm selfish as well, but alas, now and then I also develop applications with a web interface, which need to run in IE. Oh, how many times I've asked my boss to drop IE support completely, except for a screen that lets the user download any other decent web browser.
:lol:

just keep up the campaign...eventually he'll listen
 
I'll just wait. I can't see the point in getting excited about this stuff any more. It's the de-democratization of computers - making them all dependent on other people's servers.

I think maybe you're confusing WebGL with cloud computing? One doesn't necessarily have anything to do with the other. WebGL is about running apps on the client side.

That's one of the ideas behind Opera Widgets. You can make apps, download and install them, and use them as standalone apps without the Opera browser even running or being connected to the Web. It uses Opera as a framework, sort of like Java or SDL.

Sort of the exact opposite of the Cloud. So you can make HTML, CSS3, WebGL apps that install natively already today.
 
I'll just wait. I can't see the point in getting excited about this stuff any more. It's the de-democratization of computers - making them all dependent on other people's servers.

Erm, WebGL != Cloud

WebGL is an emerging standard for OpenGL enabled web applications written in JavaScript. Nothing you couldn't run entirely from your own machine.
 
The truly shitty thing about WebGL and HTML5 is JavaScript. After all these years of technical evolution, the future now belongs to one of the crappiest languages ever made: JavaScript. Now THAT is depressing. I do hope that Google's Dart takes off, but I have to say that it's a highly unlikely scenario. We'll certainly be stuck with this crap for a long time to come. The only good thing is that more and more JavaScript will likely be generated code created by framework libraries, but still...
 
The truly shitty thing about WebGL and HTML5 is JavaScript. After all these years of technical evolution, the future now belongs to one of the crappiest languages ever made: JavaScript. Now THAT is depressing. I do hope that Google's Dart takes off, but I have to say that it's a highly unlikely scenario. We'll certainly be stuck with this crap for a long time to come. The only good thing is that more and more JavaScript will likely be generated code created by framework libraries, but still...

Actually, as far as scripting languages go, I don't mind JavaScript. It took a long time to reach that stage, though; I disliked it for years, mostly because of the lack of cross-browser standardisation. Well, that and the fact that, having been a strictly compiler/assembler-based language developer for so long, all scripting languages seemed a bit noddy to me, but that's a prejudice I've managed to get under control in recent years. The former issue has become less and less of a problem too.
The biggest gripe I have with it these days is its prototype-based OO model. However, if I'm totally honest, for a scripting language where variables are invariably (excuse the pun) dynamically typed, that model makes it more useful, if far less "pure", than a more traditional class-based one.
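For anyone who hasn't bumped into it, a minimal sketch of that prototype-based model (the `Animal`/`Dog` names are just made up for illustration):

```javascript
// Constructor functions plus prototype objects, instead of classes.
function Animal(name) {
  this.name = name;
}
// Methods live on the prototype and are shared by every instance.
Animal.prototype.speak = function () {
  return this.name + " makes a sound";
};

function Dog(name) {
  Animal.call(this, name); // run the "parent" constructor on this object
}
// "Inheritance" is just pointing one prototype at another.
Dog.prototype = Object.create(Animal.prototype);
Dog.prototype.constructor = Dog;
Dog.prototype.speak = function () {
  return this.name + " barks"; // overrides Animal's version
};

var rex = new Dog("Rex");
// Method lookup walks the prototype chain at runtime:
// rex -> Dog.prototype -> Animal.prototype -> Object.prototype
```

It works, but the manual machinery (`Object.create`, patching up `constructor`) is exactly the sort of thing that takes getting used to after class-based languages.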

As for general performance, well, Google has pretty much sorted that out with their V8 implementation, which compiles the entire set of scripts in a page into native code.
 
I fully understand that JavaScript is now perhaps the fastest scripting language ever (although that's mostly because V8 compiles it to machine code). I'm not knocking it for performance (although I find it funny that the makers of the fastest JavaScript engine are also the ones who want to get rid of it). As a developer, I prefer readability and maintainability. Dynamic types rub me the wrong way. In fact, scripting languages rub me the wrong way; I find even Java to be too far removed for my liking. I know this is mostly a personal issue, and there seem to be many developers out there who love JavaScript, but I can't see myself ever being part of that group. I think we can do better than JavaScript.
 
I fully understand that JavaScript is now perhaps the fastest scripting language ever (although that's mostly because V8 compiles it to machine code). I'm not knocking it for performance (although I find it funny that the makers of the fastest JavaScript engine are also the ones who want to get rid of it). As a developer, I prefer readability and maintainability. Dynamic types rub me the wrong way. In fact, scripting languages rub me the wrong way; I find even Java to be too far removed for my liking. I know this is mostly a personal issue, and there seem to be many developers out there who love JavaScript, but I can't see myself ever being part of that group. I think we can do better than JavaScript.

That all sounds very familiar. However, the readability and maintainability of JS code is down to whoever writes it. On its own, it's a bit like a reduced set of C syntax, and if you can write decent, self-documenting C, you can do the same for JS.

One of the problems, however, is the prevalence of library frameworks such as jQuery. Don't get me wrong, it's a great library that simplifies many jobs, but its emphasis is very much on event-driven processing. Which, again, is fine and makes sense for client-side web applications, but if there is one pestilence it causes, it's a total overuse of anonymous closures. A lot of jQuery-rich code tends to end up as an absolute orgy of anonymous code that becomes increasingly difficult to read and comprehend.

What annoys me even more is when people use them so much that they don't even realise they've written the same basic function 20 times, because they've written a closure in each handler and never noticed it could be refactored into a single function passed to the 20 different event handlers they have. Consequently, the code always gets more bloated and less readable.

Still, I don't blame JS or even jQuery for that, just lazy developers.
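That duplicated-closure refactor is easy to sketch. The toy `on`/`fire` registry below just stands in for jQuery-style event binding (all names here are hypothetical):

```javascript
// A toy event registry standing in for jQuery-style .on()/.trigger().
var handlers = {};
function on(eventName, fn) {
  (handlers[eventName] = handlers[eventName] || []).push(fn);
}
function fire(eventName, arg) {
  return (handlers[eventName] || []).map(function (fn) { return fn(arg); });
}

// The anti-pattern: the same logic re-typed as an anonymous closure per handler.
on("save",   function (item) { return item.name.trim().toLowerCase(); });
on("submit", function (item) { return item.name.trim().toLowerCase(); });

// The fix: hoist the duplicated body into one named function...
function normaliseName(item) {
  return item.name.trim().toLowerCase();
}
// ...and pass that single function to every handler that needs it.
on("rename", normaliseName);
on("import", normaliseName);
```

With two handlers the duplication is obvious; buried inside 20 nested jQuery callbacks, it rarely is.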
 
Ya, closures are kinda annoying that way. My first real exposure to them was when I started writing my little Android app in Java, back in March, so my experience is less than a year. At first I thought they were kinda neat and reminded me of lambda functions/expressions in C++11. However, they do seem to get overused a lot, and that tends to make for some pretty ugly code. So I ended up reverting to my C++ ways of defining properly named classes and using those instead.

Anyway, hope you don't mind, but I'm gonna choose to keep a closed mind on this and continue my hate for JS. :D Ars Technica had a nice little article on JS and Dart: "JavaScript has problems. Do we need Dart to solve them?"
 
Ya, closures are kinda annoying that way. My first real exposure to them was when I started writing my little Android app in Java, back in March, so my experience is less than a year. At first I thought they were kinda neat and reminded me of lambda functions/expressions in C++11. However, they do seem to get overused a lot, and that tends to make for some pretty ugly code. So I ended up reverting to my C++ ways of defining properly named classes and using those instead.

No matter how long I stray away, I always come back to C and C++ eventually ;)

Anyway, hope you don't mind, but I'm gonna choose to keep a closed mind on this and continue my hate for JS. :D Ars Technica had a nice little article on JS and Dart: "JavaScript has problems. Do we need Dart to solve them?"

No, I don't mind at all. Almost all of the objections I had towards JS as a language (as opposed to being a browser feature) still stand. I just don't care as much as I used to.
 
That all sounds very familiar. However, the readability and maintainability of JS code is down to whoever writes it.
Class inheritance, interfaces, reflection, generics, type safety, etcetera, etcetera make 4th-generation languages sooooo much nicer to develop for. True, currently I use GWT, which 'compiles' my Java to JavaScript. But JavaScript's syntax invites messy ad-hoc coding, and I have seen quite a few masterpiece examples of that. I see GWT merely as a patch on a big, big wound called HTML, HTTP and JavaScript.
 
No matter how long I stray away, I always come back to C and C++ eventually ;)
Like I mentioned in another thread, I was at a two-day conference this week. One of the sessions was about "clean code". Anyway, the presenter was rather religious about breaking things down as far as possible, so having 20 functions of one or two lines each was great as far as he was concerned. I'm not sure I'd go as far as he does, but I certainly see the value in breaking things down, if for no other reason than to give them a proper name that describes what's going on.

Of course, early on in the session he wrote off comments as counterproductive. There's nothing stopping you from placing a comment around your code to describe its purpose, but comments aren't compiled, and they can be out of date, misleading or just plain wrong. Packaging lines of code into a function with a well-thought-out name makes things much cleaner.

The same rule applies to the anonymous inline classes some Java developers seem to love. By using an anonymous class you miss the chance to give it a proper name that will help future developers (or yourself) maintain that code years later.
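The comment-versus-name point is easiest to see side by side; a small hypothetical sketch (the VAT rule is invented purely for illustration):

```javascript
// Before: a comment has to explain what the inline expression is doing.
function totalBefore(prices) {
  return prices.reduce(function (sum, p) {
    // apply 20% VAT and round to whole pence
    return sum + Math.round(p * 1.2 * 100) / 100;
  }, 0);
}

// After: the extracted function's name says what's going on, and unlike a
// comment it can't silently drift out of date while the code changes.
function withVatRounded(price) {
  return Math.round(price * 1.2 * 100) / 100;
}
function totalAfter(prices) {
  return prices.reduce(function (sum, p) { return sum + withVatRounded(p); }, 0);
}
```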

So... am I beating a dead horse? :p
 
I'm not a big fan of reflective programming. Actually, it has some very good uses, but the coding fundamentalist in me keeps insisting that if you don't know what type something is, you either shouldn't be trying to use it, or your design is broken somewhere, because it exposes you to some completely anonymous type that you aren't equipped to handle without reflecting on it.

On the flip side, when dealing with layer abstraction, particularly between different systems, reflection becomes an invaluable tool for implementing adapters and so on.
 
So... am I beating a dead horse? :p

I dunno. It sounds like we largely agree on preferred coding practices, even if you loathe JS and I don't mind it. In my case, I have to use it regularly, so hating it is too much like effort nowadays.
 
I'm not a big fan of reflective programming. Actually, it has some very good uses, but the coding fundamentalist in me keeps insisting that if you don't know what type something is, you either shouldn't be trying to use it, or your design is broken somewhere, because it exposes you to some completely anonymous type that you aren't equipped to handle without reflecting on it.

On the flip side, when dealing with layer abstraction, particularly between different systems, reflection becomes an invaluable tool for implementing adapters and so on.
I know what you mean. I believe reflective programming is still in its early stages. I only use it with annotations, but I haven't yet found out how I can 'demand' that an annotated method return a value of a certain primitive type, (sub)class or interface, and take a particular number and set of parameter types.
Surely, one should be able to query an unknown object?
And without that, I don't think things like serialization would be possible at all (without pointers, that is).
 
Surely, one should be able to query an unknown object?

That depends on the context. If it's anonymous as part of some encapsulation/abstraction, then it's probably best left anonymous. However, it also implies that the system you are using has a flaw, in that you're exposed to this unknown type in the first place; especially if all you can do is pass it from one library component to another, that's an implementation detail that hasn't been properly hidden from you.

And without that, I don't think things like serialization would be possible at all (without pointers, that is).

Something serializable should advertise the fact by implementing a serializable interface. The moment it advertises that interface, it's no longer an anonymous type: you might not know what it actually is, but you know you can convert it to a stream representation. That might not sound useful, but it would be a reasonable strategy for implementing the Memento pattern. The object whose state it holds would know how to deal with its actual type; as a client, all you have to be able to do is receive it and store it.
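In a duck-typed language the "advertised interface" boils down to checking for the method. A hypothetical sketch of that Memento-style arrangement (all names invented for illustration):

```javascript
// The originator: it knows its own state and advertises serialize/restore.
function Counter() {
  var count = 0; // private state, hidden in the closure
  this.increment = function () { count += 1; };
  this.value = function () { return count; };
  // The "interface": an opaque stream representation of the state.
  this.serialize = function () { return JSON.stringify({ count: count }); };
  this.restore = function (memento) { count = JSON.parse(memento).count; };
}

// The caretaker client: it can hold the memento without knowing what's inside.
function snapshot(obj) {
  if (typeof obj.serialize !== "function") {
    throw new TypeError("object does not advertise serialization");
  }
  return obj.serialize(); // opaque string; only the originator interprets it
}
```

The client never learns the object's actual type; it just receives the stream and hands it back later.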
 
As I stated, I only use reflection in combination with annotations. If a language provided a strict 'method-safe' reflection system, so to speak (interfaces at the method level instead of the class level), and you could query those methods by annotation, I don't see how that'd be much different from ordinary interfaces, and you could make your libraries a lot more powerful with it. What's more, I use Hibernate in combination with annotations instead of the dreadful XML config files. Every aspect nicely in one object :)
 
Before opening up more WebGL pages, you may want to take a look at this: http://en.wikipedia.org/wiki/WebGL#Security.

In May 2011, security firm Context Information Security published a report that elaborated on a number of security issues present in current Google Chrome and Mozilla Firefox WebGL implementations and inherent to the WebGL specification. According to the report, WebGL fundamentally allows Turing-complete programs originating from the Internet to reach kernel-mode graphics drivers and graphics hardware. The report also provided references to example exploits of the security issues capable of causing denial of service and cross-domain image theft. The report concluded that "browsers that enable WebGL by default put their users at risk to these issues."[15]
 
Before opening up more WebGL pages, you may want to take a look at this: http://en.wikipedia.org/wiki/WebGL#Security.

It's a slightly vague warning. In order for it to be a proper threat, a GLSL program would have to be able to open up a known exploit in a given set of kernel mode drivers and find some way of getting code through to the CPU via that exploit. Otherwise, the sort of vulnerability we are talking about is freezing up your graphics driver. Which, if you use ATI Catalyst stuff, typically happens every half hour anyway ;)
 
True, but I have to wonder if GPU drivers are really built for security. I can't say I've ever seen a security benchmark for the latest AMD chipset. Performance rules the GPU world and no one even thinks about security, except the hackers of course. I'm sure there are plenty of exploits waiting out there, and most likely they'd only need to target some embedded GPU that's hugely common, like the one in all the Sandy Bridge CPUs out there.
 
In order for it to be a proper threat, a GLSL program would have to be able to open up a known exploit in a given set of kernel mode drivers and find some way of getting code through to the CPU via that exploit.

Admittedly, I know diddly-squat about ultra-modern PC architecture. But wouldn't it be trivial to grab the main CPU if you had complete control over the programmable GPU? Couldn't you just use the GPU's DMA to modify whatever system RAM you want? I don't think the OS would know how to defend against a hostile PCIe card, and I don't know of anything that would ensure the GPU respects the processor's mapping of data/executable regions.
 