
Messages - TheMachine02

#1
The first tool able to downgrade is now out: https://tiplanet.org/forum/archives_voir.php?id=1398736  :P
#2
Nope :trollface:

Seriously, this is cool.
#3
I doubt there will be major user-facing syntax changes now, merely new features. So you can safely start doing something  :P
#4
Actually the job is part of school, so yeah, I still have school. And also a super important exam in 3 years  :P
#5
Yeah, it is counting everything  :P

Anyway, I've entered a low-activity phase (like really low), which is pretty much my future right now (busy job!). So we'll see how that evolves. Code is now on GitHub!  https://github.com/TheMachine02/Virtual3D
#6
The 7cc figure is for flat shading; it is just an ldir. Texturing comes in at about 26cc per pixel for unlit, 32cc for lit.
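For scale, here is a quick back-of-envelope conversion of those per-pixel costs into fill times. The 48 MHz clock and the 320x240 full-screen fill are my assumptions, not figures from the post:

```python
# Back-of-envelope fill-rate estimate from the per-pixel cycle counts above.
# Assumptions (not from the post): a 48 MHz ez80 clock, 320x240 target.
CLOCK_HZ = 48_000_000
PIXELS = 320 * 240

def full_fill_ms(cycles_per_pixel):
    """Milliseconds to touch every pixel once at the given per-pixel cost."""
    return PIXELS * cycles_per_pixel / CLOCK_HZ * 1000

for name, cc in [("flat (ldir)", 7), ("textured, unlit", 26), ("textured, lit", 32)]:
    print(f"{name}: {full_fill_ms(cc):.1f} ms per full-screen fill")
```

Under those assumptions, a full-screen flat fill lands around 11 ms, and lit texturing around 51 ms, before any transform or sorting work.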
#7
Yeah, I really should indeed. There won't be much improvement to flat filling though - I haven't touched the routine in ages - but texturing is an order of magnitude faster now.
I need to grab an old demo with some timings though, to measure the improvement exactly. Maybe Midna with the skybox? (I need to see if I can fit the textures in now, because the skybox I used was pretty huge, and I hard-limited textures to 256x256.)
#8
It did support some advanced stuff for the time, mainly Gouraud shading alongside texturing. Filtering was pretty minimal though, versus the N64 for example, but it had plenty of RAM.

Anyway, I started to implement bounding boxes, as they are finally quite easy to get working in the pipeline, and they should hopefully improve performance quite a bit  :P
#9
It has a dedicated GPU with 1MB of VRAM indeed, which was quite efficient (well, for the time). It only supported linear texture mapping though. So yeah, pretty good  :P
#10
Sure thing. Those levels are end-of-life PSX quality grade, so don't wonder why they're a bit heavy for the calc  :P Even considering that I have no (or at least limited, versus the PS1) lighting, the simple fact that it runs *only* 8x slower (30fps vs 3-4fps) makes me happy  :D (since the ez80 definitely doesn't have PSX hardware  :P )
#11
A second issue may arise: the N64 uses a lot of repeating textures, and my engine doesn't support texture repeating at polygon boundaries. The fix is quite simple, and falls into the same category as the issue discussed by tr1p1ea: break large polygons into smaller ones.

As for the visual complexity of the game, well, I must say that the previous screenshot doesn't include ANY way of doing things properly; it just sends all polygons (and vertices as well) through the library, which is, well, a bad way to do things :P One of the primary optimizations you can do in such a context is bounding boxes: test a whole sub-part of the level against the frustum using 8 vertices, and skip it entirely if it is outside. Given that the library can handle different streams of polygons/vertices from different locations and doesn't expect a single consecutive stream, this is a very interesting thing to do.

A second optimization that may be doable is to reduce the far-plane visibility, i.e. how far away you can see polygons. The PS1 used this technique very often, hiding the cut with some fog. We won't be able to do fog here, but it is still a good thing to have, given that depth is computed for sorting anyway.

The third resides in optimizing the library itself, which isn't done at all yet :P
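The bounding-box test described above can be sketched roughly as follows. This is a minimal illustration, not the glib API: it assumes axis-aligned boxes and a frustum given as plane equations, and all names are hypothetical. Reducing the far-plane distance is just shrinking the `d` term of the far plane.

```python
# Conservative frustum culling of an axis-aligned bounding box.
# A plane is (nx, ny, nz, d); a point p counts as "inside" when
# nx*x + ny*y + nz*z + d >= 0. If all 8 corners of the box are outside
# any single plane, the whole level chunk can be skipped entirely.

def box_corners(lo, hi):
    """The 8 corners of the box spanning lo=(x0,y0,z0) to hi=(x1,y1,z1)."""
    return [(x, y, z) for x in (lo[0], hi[0])
                      for y in (lo[1], hi[1])
                      for z in (lo[2], hi[2])]

def outside_plane(corners, plane):
    nx, ny, nz, d = plane
    return all(nx*x + ny*y + nz*z + d < 0 for (x, y, z) in corners)

def cull_box(lo, hi, frustum_planes):
    """True if the box is certainly outside the frustum (skip its polygons)."""
    corners = box_corners(lo, hi)
    return any(outside_plane(corners, p) for p in frustum_planes)

# Example: a far plane at z = 100 facing back toward the camera.
far_plane = (0.0, 0.0, -1.0, 100.0)   # inside when z <= 100
print(cull_box((0, 0, 150), (10, 10, 160), [far_plane]))  # True: beyond far plane
print(cull_box((0, 0, 0), (10, 10, 10), [far_plane]))     # False: keep this chunk
```

Note the test is conservative: a box that straddles a plane, or sits outside the frustum without being fully outside any one plane, is kept and left to per-polygon clipping.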
As a side note, I should also say that the current pipeline must process all vertices (this is mandatory due to clipping). So removing some vertices through the bounding boxes is an even better idea than before (the 2000 vertices processed in the last screenshot ARE heavy, at about 90ms).
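Those two figures pin down the per-vertex transform cost, and give a feel for what culling could buy. The 50% culled fraction below is a hypothetical ratio, not a measurement:

```python
# Rough per-vertex transform cost from the figures above:
# 2000 vertices processed in ~90 ms.
VERTICES = 2000
TOTAL_MS = 90.0

per_vertex_us = TOTAL_MS / VERTICES * 1000   # ~45 us per vertex

# Hypothetical ratio (not measured): if bounding-box culling let the
# pipeline skip half the vertices, the saving would be roughly:
culled_fraction = 0.5
saved_ms = culled_fraction * TOTAL_MS        # ~45 ms of the 90 ms

print(f"{per_vertex_us:.1f} us/vertex, ~{saved_ms:.0f} ms saved at 50% culled")
```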

Just to give you an idea: through code refactoring, color glib is almost 400% faster than the early versions - and that is mostly through new algorithms / better pathways, not so much through code micro-optimization, apart from the texture-mapping routine, in which I've already put a lot of work. So given the opportunity (and mostly because two of the three optimizations aren't glib's work but rather the user's, and I don't use my own library well  :P ), the complexity may be pretty high. I also may start making more advanced use of the library for the following demos, because, well, I want more speed obviously.

To finish, well, if anyone has the level, I can simply convert it and load it up in the engine  :)

EDIT: huh, I forgot to include the last eye-candy, shame on me  :P



EDIT 2: well, I said that bounding boxes should be user-made; that is not totally true, as I could also make them mandatory, but I fear I may lose flexibility. What do you think?
#12
It isn't dead!


(London, the whole level 4 of Tomb Raider. ~3500 triangles and 2000 vertices)

Don't look at the lighting, it is broken  :P

Seriously, optimizing is taking most of my time now, and it is... like... you know, hard  :P
#13
Definitely awesome.  :)  How will you handle lighting with a custom palette?
#14
Who said I was alive in the first place ?  :P

Anyway, this model is currently @1000 triangles, but I will do a very low-poly model @400 triangles. At least I have a high-res head, which will be useful for animation.
#15
Stilllll aliiiiivvveeeeee. J/k  :P

I am currently texturing / learning blender / hacking models / programming and doing other stuff.

This is what I currently have:
