
Thread: PS2 vs Dreamcast Graphics

  1. #376
    Raging in the Streets Team Andromeda's Avatar
    Join Date
    Jul 2010
    Posts
    2,548
    Rep Power
    27

    Default

    Quote Originally Posted by Silanda View Post
    Ok then, give us a definition of "true bump mapping".
    Bump Mapping .

    Sooo... no, then?
    For one who's quick to have a go and point out what's not technically correct and how things get simplified for the press, etc., etc., I would have thought that you, more than most, would agree that it's not actually true bump mapping.

    Andromeda is constantly rejecting anything that makes the Dreamcast sound impressive
    That's it, is it? This board must make SEGA's console sound the best? Saying the DC didn't have hardware support for bump mapping is just stating a fact. It took until the Xbox generation to see decent bump mapping on consoles, because neither the PS2 nor the DC really bothered to use the effect.

    Why the hell would I be playing a 480p game on a 480i CRT TV? Anything 480
    Back in 2001, when the Xbox came out, plasma or LCD was a pipe dream and out of the price range for most consumers, and even in 2004 not many people had TVs that supported 480p.

    The logic of this is basically saying unless you own a game you're completely fucking blind when viewing any footage or screenshots of it. The only aspect of a game that this matters on is gameplay, nothing else.
    Not at all. People like you just going by YouTube may think Touring Car looks good and runs OK; anyone who's played and owned the game would tell you it's a heap of shit in the framerate department. YouTube also doesn't show up things like game borders and so on, so many may think the likes of Wave Race or Daytona USA were full screen, while anyone who's played the games knows otherwise. There are a lot of variables to do with footage.

    Sega Saturn Magazine from the UK had extensive coverage of the pre-release Dreamcast tech demo that showed off some of the hardware effects
    Yes, the Irimajiri-san tech demo. It's not true bump mapping, and speaking of the UK, EDGE (or it may have been the likes of Games Master) did a big feature and interview with the Climax graphics team about Blue Stinger, and in that interview the team confirmed it wasn't true bump mapping and that they weren't going to use the effect, as it hit the hardware too much; they were going for the highest polygon count they could with a light source.

    Looks like you may be right on that one
    Well, the clue is in the name. Sure, it was called the Xbox for a reason.

    He rejects *everything* that makes anything sound impressive
    No, I just tend to deal in the real world and with what real games deliver on the hardware in question.

    FMV on the Sega CD is a trick
    It's not a trick at all. More of a trick on the Mega Drive would be the 180-degree rotation trick that the likes of Treasure and Core used.

    Good points. Several PS2 racers also seem to have environment mapping while most of the DC ones doesn'
    Yep, and did you ever see any heat haze effects in any DC racer, or come to think of it, in any DC game at all (well, bar Maken X)?
    Panzer Dragoon Zwei is one of the best 3D shooting games available
    Presented for your pleasure

  2. #377
    Road Rasher
    Join Date
    Oct 2012
    Posts
    378
    Rep Power
    29

    Default

    Quote Originally Posted by Barone View Post
    We're probably talking about something in the 50-150 (maybe a bit more, IDK) range of games, projecting numbers based on what this September 1999 article states:
    "Currently, more than 70 titles in development for the Dreamcast are making use of the Win CE development environment. If Microsoft's tools live up to expectations, we can hopefully expect great things from these titles."

    http://www.ign.com/articles/1999/09/...ced-for-the-dc
    Darn, if that number is related to what rusty talked about then that's a pretty huge number of games! Sounds like Sega could have done with a better partner than a double-crossing, backstabbing one like Microsoft.
    I find it ironic that people like to lambast Sony for "killing the Dreamcast" when it sounds like it was Microsoft that found a far more effective way of killing it.
    I guess if the Dreamcast was an assault victim lying in a hospital, it was probably Sony who put it there, but it was Microsoft that snuck into the hospital room and "suffocated" the DC with a pillow by keeping all those completed games unreleased.

  3. #378
    Wildside Expert
    Join Date
    Jan 2014
    Posts
    145
    Rep Power
    9

    Default

    Quote Originally Posted by Barone View Post
    The more I read about the PS2 hardware design the more I feel that it was intended to give as many alternatives as possible to the developer in order to offload the CPU from most of the rendering-related tasks and make the game run as fast as possible.
    I mean, it seems to have been designed to provide a plethora of possibilities for graphical effects without hurting the frame rate.
    Yeah, that's pretty much how the hardware seemed to be. I was thinking about this on the way to work this morning, and how modern hardware is more and more sanitized and forces you into certain ways of working. It's not a bad thing, just a sign of the times. There's a lot you can do with the programmable pipe-lines despite their rather narrow view of the data that they're dealing with.

    Quote Originally Posted by Barone View Post
    Oh, sorry for that. But some people here usually go apeshit when I say that the PS2's capabilities for polygon rendering were clearly ahead of the DC's, and visual examples seem to fail to convince them; so cold hard numbers are all that they usually like to "bite".

    But, really, cold comparisons of hardware specs usually leads to many misconceptions and assumptions about how well the hardware will perform. However, it's not an easy task to convince people to pay attention to how the hardware specs actually play in practical situations, how the design of the system can influence the overall performance much more than bigger or lower numbers...
    No worries. We never gave a damn about poly counts. I don't see why people care, because the tests done by hardware manufacturers are completely rigged. They draw small 3x3 triangles without having to clip them etc. It's such a bs measurement. Look at the bus width, and frequency and you'll get a better idea of how many pixels it can render in a single frame. That's a much better benchmark.


    Quote Originally Posted by Barone View Post
    Very interesting info there. By "from day 1" you mean since the official release of the PS2 or even prior to that Sony already admitted to the 3rd parties that their AA system was broken?
    'Cause launch titles like Namco's Ridge Racer V ended up being released without any sort of workaround for the lack of AA, it seems... I mean, I always supposed that they waited until the last minute for a hardware revision that would fix the problem prior to the console's release and ended up being screwed, having to smoke a bit of their reputation by releasing their major franchise with lots of jaggies.
    The first Virtua Fighter 4 port is also a serious offender in such aspect despite being released in 2002.
    My memory is a bit hazy. But probably at some point before the final dev hardware was available. This tends to be quite late in the day, like a couple of months before they start burning discs for software. They probably realized when they had the final wafers that it was FUBAR, but Ridge Racer V was too late in development to change that.



    Interesting. Yeah, I love how you use fake High Dynamic Range in Burnout 2, it still looks good today IMO.
    Do you think that it would be possible/feasible to implement fake-HDR in Dreamcast racing games as well? 'Cause I don't remember playing any DC games with that effect.
    I think that you could. But the fill-rate might have been a blocker. See, the bloom was just some sort of material tag encoded in the alpha channel of the back buffer. Then it was reduced a couple of times, just reading and writing the alpha channel, and then scaled up and multiplied by the rgb of the back buffer. You might have had to wrangle Kamui2 a little (the Dreamcast's graphics API) but I think it supported the required blend modes.
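    Roughly, that works out to the sketch below. This is just a CPU-side illustration of the idea as described, not the actual Burnout code; the buffer layout, the number of reduction steps, and the final combine (RGB boosted by the blurred alpha tag) are all assumptions.

    Code:
    // CPU-side sketch of the "alpha-tag bloom": bright materials write a tag into
    // the back buffer's alpha channel, the alpha plane is reduced a couple of
    // times (a cheap blur), then scaled back up and used to boost the RGB.
    #include <algorithm>
    #include <cstdint>
    #include <vector>

    struct Pixel { uint8_t r, g, b, a; };            // a = material "glow" tag

    // Box-filter the alpha plane down to half resolution.
    static std::vector<uint8_t> downsampleAlpha(const std::vector<uint8_t>& src,
                                                int w, int h) {
        std::vector<uint8_t> dst((w / 2) * (h / 2));
        for (int y = 0; y < h / 2; ++y)
            for (int x = 0; x < w / 2; ++x) {
                int sum = src[(2 * y) * w + 2 * x]     + src[(2 * y) * w + 2 * x + 1] +
                          src[(2 * y + 1) * w + 2 * x] + src[(2 * y + 1) * w + 2 * x + 1];
                dst[y * (w / 2) + x] = uint8_t(sum / 4);
            }
        return dst;
    }

    void fakeHdrPass(std::vector<Pixel>& backBuffer, int w, int h, float k = 0.75f) {
        // 1. Pull the alpha (material tag) plane out of the back buffer.
        std::vector<uint8_t> mask(w * h);
        for (int i = 0; i < w * h; ++i) mask[i] = backBuffer[i].a;

        // 2. Reduce it a couple of times; two box downsamples act as a blur.
        std::vector<uint8_t> half    = downsampleAlpha(mask, w, h);
        std::vector<uint8_t> quarter = downsampleAlpha(half, w / 2, h / 2);

        // 3. Scale it back up (nearest here, hardware would filter bilinearly)
        //    and boost the RGB where the blurred tag is bright.
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x) {
                float m = quarter[(y / 4) * (w / 4) + (x / 4)] / 255.0f;
                Pixel& p = backBuffer[y * w + x];
                p.r = uint8_t(std::min(255.0f, p.r * (1.0f + k * m)));
                p.g = uint8_t(std::min(255.0f, p.g * (1.0f + k * m)));
                p.b = uint8_t(std::min(255.0f, p.b * (1.0f + k * m)));
            }
    }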


    I remember how a lot of people were going crazy in gaming forums when it was announced that the GameCube would support one-pass multitexture. I guess that was another case where people should be trying to understand what that meant rather than going crazy reading the specs...
    The same for S3TC hardware support on the GC, 'cause, if I'm not misunderstanding what you said, you could also have fixed-rate data compression on the PS2 with similar results just by using VQ-based schemes. Of course, you would have to implement it though.
    That's absolutely correct. The great thing about the PS2 implementation was that you could generate MIP maps on the fly, too. With the DC you'd have several VQ textures for the MIPs.
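    For anyone unfamiliar with VQ textures, the idea is sketched below: a small codebook of 2x2 texel blocks plus one index byte per block, so sampling is just a table lookup and the texture costs a fixed couple of bits per texel. Details like twiddled layout and how the MIP levels are packed are left out; this is the general idea, not the PVR2 format spec.

    Code:
    // Sketch of VQ texture sampling: look up which codebook block covers the
    // 2x2 region, then pick the texel within that block.
    #include <cstdint>
    #include <vector>

    struct CodebookEntry { uint16_t texel[4]; };      // 2x2 block of 16-bit texels

    uint16_t vqSample(const std::vector<CodebookEntry>& codebook,  // e.g. 256 entries
                      const std::vector<uint8_t>&       indices,   // one per 2x2 block
                      int width, int x, int y) {
        int blocksPerRow = width / 2;
        uint8_t idx = indices[(y / 2) * blocksPerRow + (x / 2)];   // which block
        return codebook[idx].texel[(y % 2) * 2 + (x % 2)];         // texel inside it
    }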


    Still about how you could exploit the fill-rate capacity of the PS2, I think Emboss Bump Mapping could also be done with no significant hit to the performance.
    Did you use it in the Burnout games on the PS2 or any other game that you developed on the console (there are dozens of threads in different gaming forums about the subject)?
    It was possible, in fact, more than possible. If I remember rightly, one of the SDK demos did bump mapping. It's just modulation of the diffuse colour. Think of the GS as a super-computer with 4MB of register space and you're getting an idea of what it can do. (Paraphrasing from a Sony tech conf.)
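    For reference, classic emboss bump mapping boils down to sampling a height map twice, the second time nudged along the light direction, and using the difference to darken the diffuse colour. A rough CPU sketch of that idea follows; the names, step size and bias are illustrative, not taken from any particular SDK demo.

    Code:
    // Emboss bump mapping sketch: on hardware this would be two texture passes
    // with the second UV set offset along the (tangent-space) light direction
    // and a subtract/modulate blend; here it is done per texel on the CPU.
    #include <algorithm>
    #include <cmath>
    #include <cstdint>
    #include <vector>

    struct Rgb { uint8_t r, g, b; };

    Rgb embossTexel(const std::vector<uint8_t>& height, int w, int h,
                    int x, int y, float lightU, float lightV, Rgb diffuse) {
        // First sample: height at the texel itself.
        float h0 = height[y * w + x] / 255.0f;

        // Second sample: a small step along the light direction
        // (this is the shifted UV set of the second pass).
        int x1 = std::clamp(x + int(std::lround(lightU * 2.0f)), 0, w - 1);
        int y1 = std::clamp(y + int(std::lround(lightV * 2.0f)), 0, h - 1);
        float h1 = height[y1 * w + x1] / 255.0f;

        // The difference approximates the slope facing the light; bias it so a
        // flat surface stays at full brightness.
        float shade = std::clamp(1.0f + (h0 - h1), 0.0f, 1.0f);

        // Modulate the diffuse colour by the emboss term.
        return { uint8_t(diffuse.r * shade),
                 uint8_t(diffuse.g * shade),
                 uint8_t(diffuse.b * shade) };
    }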

    What about "DOT3 bump mapping"/normal mapping? EMBM?
    You could do it in a variety of ways. There was a 6-pass method presented by Sony waaaayyy back in the day. But here's info on a 2 and 4 pass method

    Any idea about what actually was the Ubisoft's Geotexture (they hyped about it back then)?
    Not a clue. Might have been mega texturing, where they splatted a whole load of textures into VRAM on the fly and concatenated the draw calls into one for the world geom. I'll have to look it up.

    For the comparative sake of this thread, could those techniques be effectively used on the Dreamcast, for an example, in the pavement of the track in a racing game without major impacts in the performance of the system (F-Zero GX on the GC does it in at least one of its tracks)? 'Cause we've seen very little use of bump mapping in Dreamcast games, stuff like a coin in Shenmue.
    I don't think so. There was this issue about the fill-rate that meant you'd probably kill performance, as the bump mapping on the DC was a two-pass method.

    AFAIK you could implement normal mapping in a scene with a single light source without much trouble on the DC but IDK how expensive it could be when used in situations like in F-Zero GX. I also think that the "single light source" limitation could be a deal breaker in many cases but, please, correct me if I'm wrong.
    You could probably do it, but the fill-rate (or lack of it) would kill you dead. Maybe on a half-size back buffer and scale it up. But it'd look messy. The Dreamcast's strength was that it could render nice looking textures. It would have made more sense to stick with that than try something exotic.


    Also, in a previous discussion in this thread, a forum member said that the Dreamcast didn't have hardware support for Bump Mapping and posted a picture from an old magazine (http://farm6.staticflickr.com/5471/1...bfc5912f_c.jpg) to prove that. Despite having been thrashed by other members, I think he (and Hideki Sato) was probably talking about no hardware support for Bump Mapping with per-pixel lighting (which used to be considered the "real" one at the time, IIRC), in which case he would be right. Any thoughts/comments?
    Well, in a way, Bump Mapping really is per-pixel lighting. You're modulating the RGB intensity, per pixel, based on the intensity from the bump map. I mean, it's not *lighting* on the DC, but it looked nice. As I said, it was a two-pass solution with a specific blend mode. Just like DX, just like OGL, just like PS2... hell, just like GameCube and Wii too. It used the hardware to do the modulation. That sounds like hardware bump mapping to me. Just because it's not a specific "bump map" mode, the hardware is still doing it for you.

    But you have to back-project the light source into tangent space to do it properly. Not something you could do on the DC. So not *really* bump mapping. But who would have noticed? I never used it on the DC. I guess that's what's meant by the fact that it's not true bump mapping.

    Sorry...that answer is all over the place, isn't it? Lack of sleep. Basically; it's not "proper" bump mapping on the DC because you're right; it's not using the light to help modulate the bump value per pixel.
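    For the curious, "back-projecting the light into tangent space" is the per-vertex step sketched below: build a tangent/bitangent/normal basis at the vertex and express the light direction in it, so a per-pixel (DOT3-style) pass can compare it against a normal map. This is a generic textbook version, not DC or PS2 specific, and the names are made up.

    Code:
    #include <cmath>

    struct Vec3 { float x, y, z; };

    static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static Vec3  normalize(Vec3 v) {
        float len = std::sqrt(dot(v, v));
        return { v.x / len, v.y / len, v.z / len };
    }

    // Per vertex: world-space light direction -> tangent-space light direction.
    Vec3 lightToTangentSpace(Vec3 lightDirWorld,
                             Vec3 tangent, Vec3 bitangent, Vec3 normal) {
        Vec3 l = normalize(lightDirWorld);
        // Projecting onto each basis axis is the "back-projection".
        return { dot(l, tangent), dot(l, bitangent), dot(l, normal) };
    }

    // Per pixel (DOT3): modulate diffuse by N.L, with N fetched from a normal map.
    float dot3Intensity(Vec3 normalFromMap, Vec3 lightTangentSpace) {
        float nl = dot(normalize(normalFromMap), normalize(lightTangentSpace));
        return nl > 0.0f ? nl : 0.0f;
    }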


    In terms of environment mapping, I'd like to ask if you were using real-time reflections in any of the PS2 Burnout games (like Melbourne House claimed to be using in Grand Prix Challenge), or was it more like a fake dynamic environment mapping done right (Burnout 3 seems to have improved over Burnout 2 in that respect)? Back in the day, the press reported that Gran Turismo 3 was using real-time reflections, but from what I have carefully observed it also seems to be just a well done fake.
    And, again, even fake dynamic environment mapping was something quite rare on the Dreamcast; I only remember Le Mans (also from Melbourne House) using it, and you could easily notice that it was fake (not on par with GT3's, for example). Would the DC's fill-rate be a problem for dynamic environment mapping, or was it more about being CPU-heavy?
    On the PS2, would it be possible to implement dynamic environment mapping with real-time reflections using just/mostly the GS instead of it being CPU-heavy?
    For comparison, PC games of the time like F1 Racing Championship (by Ubisoft) would make your CPU crawl if you switched from "Static Environment Mapping" to "Dynamic Environment Mapping" in the options menu.
    So environment mapping is just a multi-texture technique, right? You take the env map, which is pre-generated or rendered dynamically, work out the UV coordinates based on the viewing angle to the vertex normals, and maybe also calculate the blending factor. Then you just do an alpha blend. But to do that on the DC, you would have needed to draw every environment mapped triangle twice... it's a CPU killer as well as a fill-rate killer.
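    The per-vertex maths behind that pass is roughly the sketch below: reflect the view direction around the vertex normal, turn the reflection vector into sphere-map UVs, and derive a blend factor from the viewing angle for the alpha blend. This is the generic textbook version, not any particular engine's code, and the view-angle blend is just one plausible choice.

    Code:
    #include <cmath>

    struct Vec3 { float x, y, z; };
    struct EnvMapVertex { float u, v, blend; };

    static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    EnvMapVertex envMapCoords(Vec3 viewDir /* unit, eye -> vertex */,
                              Vec3 normal  /* unit vertex normal   */) {
        // Reflect the view direction around the normal.
        float d = dot(viewDir, normal);
        Vec3 r = { viewDir.x - 2.0f * d * normal.x,
                   viewDir.y - 2.0f * d * normal.y,
                   viewDir.z - 2.0f * d * normal.z };

        // Classic sphere-map projection of the reflection vector.
        float m = 2.0f * std::sqrt(r.x * r.x + r.y * r.y + (r.z + 1.0f) * (r.z + 1.0f));
        float u = r.x / m + 0.5f;
        float v = r.y / m + 0.5f;

        // Simple view-angle blend: glancing angles reflect more.
        float facing = -d;                        // cos of the angle to the eye
        float blend  = 1.0f - (facing > 0.0f ? facing : 0.0f);

        return { u, v, blend };
    }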

    The big problem with dynamic environment mapping on the PS2 was that you had to push so much stuff through the VIF. That would have been the bottleneck, and I think on B3 we just streamed in the fake environment map that had been rendered by the tools with the rest of that streamed section of the track. Oh yes... all the Burnout games streamed in the track on demand. You could do it, but if the faked version looked good enough - run with it! Who's going to notice at the speeds you're driving?

    Please, if you could provide a similar brief comparative analysis of the possibility of implementing other effects that you used in the Burnout games, like Radial Blur (which I also don't remember having been used in any Dreamcast game), on the Dreamcast, it would be really cool.
    Are you thinking perhaps of focal blur? Where you'd have one part of the scene "in focus" with the rest all blurred? It's a multi-texture technique and I don't think you could do it on the DC. That's because you need the depth buffer and as you probably know, the PVR2 didn't have a depth buffer.
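    In other words, the composite step needs a per-pixel depth value, roughly as sketched below: render the scene, make a blurred copy, and blend between the two per pixel based on how far that pixel sits from the focal plane. The names and the linear falloff are illustrative, not the Burnout implementation.

    Code:
    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct Rgb { uint8_t r, g, b; };

    void focalBlurCompose(const std::vector<Rgb>&   sharp,     // full-res scene
                          const std::vector<Rgb>&   blurred,   // blurred copy
                          const std::vector<float>& depth,     // per-pixel depth
                          float focalDepth, float focalRange,
                          std::vector<Rgb>& out) {
        out.resize(sharp.size());
        for (std::size_t i = 0; i < sharp.size(); ++i) {
            // 0 = perfectly in focus, 1 = fully blurred.
            float t = std::min(std::fabs(depth[i] - focalDepth) / focalRange, 1.0f);
            out[i].r = uint8_t(sharp[i].r + t * (blurred[i].r - sharp[i].r));
            out[i].g = uint8_t(sharp[i].g + t * (blurred[i].g - sharp[i].g));
            out[i].b = uint8_t(sharp[i].b + t * (blurred[i].b - sharp[i].b));
        }
    }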

    I hope that answers the question for you.


    Maybe "memory leak" would ring a bell to your Tomb Raider friend?
    Shhh....he might hear you. Don't make him angry.


    MS also claimed that such Direct3D implementation had been optimized for the PowerVR
    Hah hah hah hah hah. Hah. Hah hah hah hah.

    You're a funny guy.

    Talking about "pushing stuff through the VIF": when you mentioned that "we were still uploading massive 256x256 uncompressed textures in one go", I didn't understand why you were doing that in the first place. Were you planning to compress them later on using some more traditional scheme with the IPU, and so you were initially uploading 256x256 in order to achieve a better compression ratio later, or something like that? It just bugged me.
    I wasn't in charge of the rendering stuff, and while I did talk to Alex about this, it was his choice. I had other responsibilities and I didn't want to step on his toes. If he felt he could make it work at 60fps, and he did, then no problem.


    By the way, would you say that the first PA system is what really boosted the look of the later PS1 games?
    Do you consider the PS2 PA a major advantage in terms of pushing the hardware when compared to the DC-related tools?
    Oh hell yes. With the DC it was a case of binary-cut when trying to work out what was running slowly. With the PS2 PA, you could see how certain systems were stalling and why.

    I have a funny story about the PS1 PA. In my first job, one of the company founders still dabbled in coding. He came in one weekend while I was working on the DC stuff, and worked away for about maybe an hour. In all this time, he didn't turn on the TV attached to his PS1 PA. He then sent an e-mail to the team, telling us that he'd removed all of the stalls in the rendering code using the PA as a guide and it now ran 80% faster than before. Come Monday morning, the lead programmer goes ape poo. And I mean completely nuclear. He sent a reply to the boss and it went a little like this "Of course there are no stalls, because it no longer draws the f***ing level you c**t. It shows nothing. Did you not even turn on your f***ing TV? Don't EVER touch our code again!"

    And he never did. In fact, it was the last time the founder touched ANY code.

    This (by Simon Fenney, one of the designers of Dreamcast's graphics chip) probably explains, at least partially, why the video output of the Dreamcast looked so vibrant:
    Nice find! Thanks for sharing!



    As a bit of an off-topic question/story: since you worked at Criterion, did you know/work with anyone from the early days of the company?
    I joined in 2002, and I think most of the really original guys had left. Maybe Paul, but he then went to Sony SoHo after about 9 months of me being there.
    Last edited by rusty; 01-30-2014 at 04:29 PM.

  4. #379
    Hard Road! ESWAT Veteran Barone's Avatar
    Join Date
    Aug 2010
    Location
    Brazil
    Posts
    6,704
    Rep Power
    138

    Default

    stu, wherever MS puts its fingers you'll "coincidentally" see a lot of "crazy stuff" happening.

    Remember when Netscape Navigator was ruling the world?
    Remember when Voodoo cards would point Direct3D and laugh?
    Remember when 3dfx would provide graphics chips for Sega to run in a MS OS?
    Remember when OpenGL was the cooler guy and had all the hottest girls?
    Remember when Linux was said to be a potential threat to Microsoft's Fourth Reich?
    Remember when Java was cool and pissed all over Visual Basic?
    Remember when stuff like Ogre3D had academic use and MS was evil (MS "convincing" academic researchers to force their students to use XNA for whatever 3D in their works and papers? Yeah, I saw that one happening... )?

    "Coincidentally" a lot of "crazy stuff" happened.

  5. #380
    Road Rasher
    Join Date
    Oct 2012
    Posts
    378
    Rep Power
    29

    Default

    Quote Originally Posted by Barone View Post
    stu, wherever MS puts its fingers you'll "coincidentally" see a lot of "crazy stuff" happening.

    Remember when Netscape Navigator was ruling the world?
    Remember when Voodoo cards would point Direct3D and laugh?
    Remember when 3dfx would provide graphics chips for Sega to run in a MS OS?
    Remember when OpenGL was the cooler guy and had all the hottest girls?
    Remember when Linux was said to be a potential threat to Microsoft's Fourth Reich?
    Remember when Java was cool and pissed all over Visual Basic?
    Remember when stuff like Ogre3D had academic use and MS was evil (MS "convincing" academic researchers to force their students to use XNA for whatever 3D in their works and papers? Yeah, I saw that one happening... )?

    "Coincidentally" a lot of "crazy stuff" happened.
    Oh yeah, then these other companies/products "magically" lose out to the Mighty Microsoft. It sure is a crazy crazy world!

  6. #381
    ESWAT Veteran Da_Shocker's Avatar
    Join Date
    Apr 2009
    Location
    Cashville,TN
    Age
    35
    Posts
    5,047
    Rep Power
    61

    Default

    Rusty has provided us with very interesting info on here.
    Quote Originally Posted by Zoltor View Post
    Japan on the other hand is in real danger, if Japanese men don't start liking to play with their woman, more then them selves, experts calculated the Japanese will be extinct within 300 years.

  7. #382
    Hero of Algol TrekkiesUnite118's Avatar
    Join Date
    May 2010
    Age
    28
    Posts
    7,554
    Rep Power
    94

    Default

    Quote Originally Posted by Team Andromeda View Post
    Back in 2001, when the Xbox came out, plasma or LCD was a pipe dream and out of the price range for most consumers, and even in 2004 not many people had TVs that supported 480p.
    VGA Cables say hi.

    Quote Originally Posted by Team Andromeda View Post
    Not at all. People like you just going by YouTube may think Touring Car looks good and runs OK; anyone who's played and owned the game would tell you it's a heap of shit in the framerate department. YouTube also doesn't show up things like game borders and so on, so many may think the likes of Wave Race or Daytona USA were full screen, while anyone who's played the games knows otherwise. There are a lot of variables to do with footage.
    Even a YouTube video will show that Touring Car has frame rate problems. I do own the game though, and it plays fine if you use the analog controller. Honestly, I have more fun playing it than Sega Rally.

    And YouTube will show you game borders if the borders haven't been edited out:



    My god look! We can see overscan borders on the sides and the vertical borders that reduce the rendering window!

  8. #383
    Outrunner
    Join Date
    Sep 2012
    Posts
    615
    Rep Power
    14

    Default

    Quote Originally Posted by rusty View Post
    Well, in a way, Bump Mapping really is per-pixel lighting. You're modulating the RGB intensity, per pixel, based on the intensity from the bump map. I mean, it's not *lighting* on the DC, but it looked nice. As I said, it was a two-pass solution with a specific blend mode. Just like DX, just like OGL, just like PS2... hell, just like GameCube and Wii too. It used the hardware to do the modulation. That sounds like hardware bump mapping to me. Just because it's not a specific "bump map" mode, the hardware is still doing it for you.
    In fact even the Saturn could do it: there's a demo where it draws a cube with paletted textures (which were very common because RGB ones ate the poor fillrate too fast), except that every palette entry is not one entry but a gradient's worth, and it moves up and down those ramps per pixel as the light moves around. It could be done in hardware because of a "bug" in how gouraud shading worked on the system. RGB textures were 15-bit, and gouraud shading just combined the RGB values of the pixels with those of the gouraud shading lookup table (a sort of primitive multitexturing). Paletted entries could be 8-bit, and the palette entry number sat in roughly the same spot as where the red colour would be in an RGB pixel. Gouraud shading treated the pixel data as if it were RGB even if it wasn't, so apply red-only gouraud shading and you make the hardware change the palette reference number, and if you have your palette entries set up anticipating this, you can do lighting effects.
    If you have some palette gradients set up so they ramp in one direction while others ramp the other way, you can get a nice chrome effect that kinda sorta looks like bump mapping.
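    As a rough illustration of that mechanism (ramp length and layout are assumed, and this is not actual VDP1 register behaviour), the "gouraud shifts the palette index" trick plays out something like this:

    Code:
    // The VDP1 applies gouraud shading to the raw pixel value, so on a paletted
    // texture a red-only gouraud delta effectively shifts the palette index.
    // If each "colour" is laid out as a ramp of entries, that shift walks up
    // and down the ramp and reads like per-pixel lighting.
    #include <algorithm>
    #include <cstdint>
    #include <vector>

    constexpr int kRampLength = 8;   // assumed entries per gradient ramp

    // colourRam: palette, arranged as consecutive ramps of kRampLength entries.
    // texel:     8-bit palette index stored in the texture.
    // redDelta:  signed red-channel gouraud contribution interpolated across
    //            the polygon (the "light").
    uint16_t shadeTexel(const std::vector<uint16_t>& colourRam,
                        uint8_t texel, int redDelta) {
        int rampBase = (texel / kRampLength) * kRampLength;   // which ramp
        int offset   = texel % kRampLength;                   // position in ramp

        // The gouraud add lands on the index itself, moving us along the ramp.
        int shifted = std::clamp(offset + redDelta, 0, kRampLength - 1);

        return colourRam[rampBase + shifted];
    }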

    However you run out of palette entries fairly quick with more colourful stuff (there are only 2048 of them).

    And the Gouraud shading with palettes was done from day 1, it is even used in the CD player, on the VU meter cubes.

    The Saturn was kind of like the PS2 in that sense. Each part couldn't do much on its own, but if you mixed each function you could do things that no other console at the time could. If they'd kept it alive for another 1-2 years, it could've done some mind-blowing things.


    Nice find! Thanks for sharing!
    It makes sense that it looks more colourful due to using a smaller colourspace. The same was true of the Saturn - people often claimed it had more vibrant colours, which was because you had to use 4-bit paletted textures to keep the speed up. There was also none of the dithering you got on the PlayStation.


    Quote Originally Posted by Barone View Post
    stu, wherever MS puts its fingers you'll "coincidentally" see a lot of "crazy stuff" happening.

    Remember when Netscape Navigator was ruling the world?
    Remember when Voodoo cards would point Direct3D and laugh?
    Remember when 3dfx would provide graphics chips for Sega to run in a MS OS?
    Remember when OpenGL was the cooler guy and had all the hottest girls?
    Remember when Linux was said to be a potential threat to Microsoft's Fourth Reich?
    Remember when Java was cool and pissed all over Visual Basic?
    Remember when stuff like Ogre3D had academic use and MS was evil (MS "convincing" academic researchers to force their students to use XNA for whatever 3D in their works and papers? Yeah, I saw that one happening... )?

    "Coincidentally" a lot of "crazy stuff" happened.
    Yeah, most of those things shot themselves in the foot while Microsoft just marched on.
    Netscape had a codebase that collapsed like a house of cards and needed a complete rewrite - you may have heard about it, it is called Firefox these days, and it is STILL a resource hog that starts up slowly just like Netscape used to.
    3dfx thought they were the kings of the world with the Voodoo, and they totally missed out on the OEM market and had no funds to continue, plus a lot of bad R&D choices.
    OpenGL couldn't modernize itself efficiently and became less and less useful. There's a giant post about it somewhere, I think on Stack Exchange.
    Linux was only ever useful for servers. It is going strong there even to this day.
    And Java was never good.
    The only thing MS is guilty of is having retarded upper management who are out of touch with modern computing. Everything else is just business, and every other big company does the same.

  9. #384
    Mastering your Systems Hero of Algol TmEE's Avatar
    Join Date
    Oct 2007
    Location
    Estonia, Rapla City
    Age
    27
    Posts
    9,925
    Rep Power
    102

    Default

    I'll chime in a little bit about the bump mapping thing on the Dreamcast. Get to the first Knuckles stage in Sonic Adventure 2 and go to those movable blocks with faces on them. Those blocks are just cubes, but the texture on them seems to be bump mapped; you can see how the nose looks volumetric, even though it really is flat when you move the camera to show the other sides of the cube. I haven't taken any more in-depth looks, but that is one place where it certainly seems to be used.
    Death To MP3, :3
    What are you reading? You won't understand it anyway. "Gnirts test is a shit" New and growing website of total jawusumness !
    If any of my images in my posts no longer work you can find them in "FileDen Dump" on my site ^

  10. #385
    WCPO Agent
    Join Date
    Jul 2006
    Location
    Birmingham, UK
    Age
    35
    Posts
    930
    Rep Power
    20

    Default

    Quote Originally Posted by Team Andromeda View Post
    Bump Mapping .
    Thanks for proving what I think quite a lot of us suspected: you know nothing. A technical term cannot be self-defining, and you can't give a definition because you are merely repeating what an old magazine said - one probably translated from Japanese, with an interviewer who wouldn't have been knowledgeable enough to ask more precise technical questions.
    Last edited by Silanda; 01-31-2014 at 03:46 AM.

  11. #386
    Raging in the Streets azonicrider's Avatar
    Join Date
    May 2013
    Location
    British Columbia
    Posts
    2,588
    Rep Power
    37

    Default

    Imagine walking around town with a draw distance that's the size of Daytona's.
    Certified F-Zero GX fanboy

  12. #387
    Wildside Expert
    Join Date
    Jan 2014
    Posts
    145
    Rep Power
    9

    Default

    Quote Originally Posted by zyrobs View Post
    In fact even the Saturn could do it: there's a demo where it draws a cube with paletted textures (which were very common because RGB ones ate the poor fillrate too fast), except that every palette entry is not one entry but a gradient's worth, and it moves up and down those ramps per pixel as the light moves around. It could be done in hardware because of a "bug" in how gouraud shading worked on the system. RGB textures were 15-bit, and gouraud shading just combined the RGB values of the pixels with those of the gouraud shading lookup table (a sort of primitive multitexturing). Paletted entries could be 8-bit, and the palette entry number sat in roughly the same spot as where the red colour would be in an RGB pixel. Gouraud shading treated the pixel data as if it were RGB even if it wasn't, so apply red-only gouraud shading and you make the hardware change the palette reference number, and if you have your palette entries set up anticipating this, you can do lighting effects.
    If you have some palette gradients set up so they ramp in one direction while others ramp the other way, you can get a nice chrome effect that kinda sorta looks like bump mapping.

    However you run out of palette entries fairly quick with more colourful stuff (there are only 2048 of them).

    And the Gouraud shading with palettes was done from day 1, it is even used in the CD player, on the VU meter cubes.
    That's pretty cool. My first employer sold off a load of old equipment once, and I bought a retail Saturn with a replay cart that they had used for Saturn development. I think it's still at my parents' place... reading this sort of stuff makes me want to try out Saturn dev. Unfortunately, I have commitments in my free time at the moment (writing a book), so pet projects like that will have to wait!

    Quote Originally Posted by azonicrider View Post
    Imagine walking around town with a draw distance that's the size of Daytona's.
    Well...most people are glued to their smartphones these days, so they sort of do walk around with a pretty minimal draw distance!

  13. #388
    Outrunner
    Join Date
    Sep 2012
    Posts
    615
    Rep Power
    14

    Default

    Oh, you mean the Cartdev? That thing is nice. Saturn devkits later on were retail Saturns with a rotary region switch, a virtual CD interface, and an NMI cable modded onto them, plus the Cartdev for debugging code (a card that goes into the Saturn, with a huge cable attached to a debugging box that you can hook up to a PC). I do believe it needs a PC interface card, at least for the Virtual CD. Not sure what interface the Cartdev used though, either SCSI or D9 serial?

    The early units (Sophia) however were giant boxes that predated the retail systems by half a year...

    Nowadays, you can also buy home made Saturn USB datalink carts. I don't know if they are any good for debugging, but they can transfer code very fast.

  14. #389
    Wildside Expert
    Join Date
    Jan 2014
    Posts
    145
    Rep Power
    9

    Default

    Quote Originally Posted by zyrobs View Post
    Oh, you mean the Cartdev? That thing is nice. Saturn devkits later on were retail Saturns with a rotary region switch, a virtual CD interface, and an NMI cable modded onto them, plus the Cartdev for debugging code (a card that goes into the Saturn, with a huge cable attached to a debugging box that you can hook up to a PC). I do believe it needs a PC interface card, at least for the Virtual CD. Not sure what interface the Cartdev used though, either SCSI or D9 serial?
    It's a D9 serial port.

    Quote Originally Posted by zyrobs View Post
    The early units (Sophia) however were giant boxes that predated the retail systems by half a year...
    There's this story I heard about how SN systems got started.

    The Sophia units were a bunch of big CPU emulators (possibly with JTAG, but that's a guess on my part) all stuck together in a big box. Anyhow, one of the founders of SN, possibly Martin Day, visited Sega and they proudly showed him the Sophia system. The story goes that he told them how impressed he was with it, took out his retail system with a PCB in the cart slot and said "this is my version". I think it was possibly with SN's help that they did that, because all the Saturn tools, from what I remember, were written by SN.

    I could be completely mixing things up though.


    Quote Originally Posted by zyrobs View Post
    Nowadays, you can also buy home made Saturn USB datalink carts. I don't know if they are any good for debugging, but they can transfer code very fast.

    Oh, that sounds awesome. Do you know if it allows bi-directional comms? Can it be used to send byte streams to the host for logging purposes? Because often that's all you need, and for a long time that's all I had in some situations.

  15. #390
    Hard Road! ESWAT Veteran Barone's Avatar
    Join Date
    Aug 2010
    Location
    Brazil
    Posts
    6,704
    Rep Power
    138

    Default

    Quote Originally Posted by rusty View Post
    Yeah, that's pretty much how the hardware seemed to be. I was thinking about this on the way to work this morning, and how modern hardware is more and more sanitized and forces you into certain ways of working. It's not a bad thing, just a sign of the times. There's a lot you can do with the programmable pipe-lines despite their rather narrow view of the data that they're dealing with.
    To be very honest, I don't like the way things have changed since the early 2000s, both in terms of hardware and games. Of course, the actual hardware is more powerful and everything is safer in terms of programming, but it just doesn't have that same sort of challenge and slightly amateurish feel, IMO. Yes, this is a nostalgic point of view, but I think that's OK for a Sega-16 member.
    The sort of art direction that I've seen in most of these last two gens is something that pushes me away from many games. I have a hard time accepting that someone's hair or skin can have mirror-like texture reflections at times, or is sort of glowing all the time, in many games - it's a shitty use of the technology IMO, to give you a brief example.



    Quote Originally Posted by rusty View Post
    No worries. We never gave a damn about poly counts. I don't see why people care, because the tests done by hardware manufacturers are completely rigged. They draw small 3x3 triangles without having to clip them etc. It's such a bs measurement. Look at the bus width, and frequency and you'll get a better idea of how many pixels it can render in a single frame. That's a much better benchmark.
    I'll take your advice.



    Quote Originally Posted by rusty View Post
    My memory is a bit hazy. But probably at some point before the final dev hardware was available. This tends to be quite late in the day, like a couple of months before they start burning discs for software. They probably realized when they had the final wafers that it was FUBAR, but Ridge Racer V was too late in development to change that.
    Thanks.



    Quote Originally Posted by rusty View Post
    I think that you could. But the fill-rate might have been a blocker. See, the bloom was just some sort of material tag encoded in the alpha channel of the back buffer. Then it was reduced a couple of times, just reading and writing the alpha channel, and then scaled up and multiplied by the rgb of the back buffer. You might have had to wrangle Kamui2 a little (the Dreamcast's graphics API) but I think it supported the required blend modes.
    It sounds like quite limited use in most cases, then.



    Quote Originally Posted by rusty View Post
    That's absolutely correct. The great thing about the PS2 implementation was that you could generate MIP maps on the fly, too. With the DC you'd have several VQ textures for the MIPs.
    Please, correct me if I'm wrong:
    - Using MIP maps will also improve the texture rendering performance.
    - On the DC you'd have to create the MIP maps beforehand (using dev tools) and use them as part of your materials, uploading the reduced textures to VRAM (usually while loading each stage/part of the game since, on the DC, uploading to VRAM on the fly is slower and much more limited than on the PS2). The use of mipmapping would represent an increase of roughly 30% in the RAM required for textures (see the quick check below), which is probably why most DC developers avoided using it.
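    That "roughly 30%" follows from each mip level being a quarter the size of the one above it, so the extra cost converges to 1/4 + 1/16 + 1/64 + ... = 1/3 of the base texture. A quick sanity check, assuming a square power-of-two texture:

    Code:
    #include <cstdio>

    int main() {
        const int baseSize = 256;                  // e.g. a 256x256 texture
        long long basePixels  = baseSize * baseSize;
        long long totalPixels = 0;
        for (int s = baseSize; s >= 1; s /= 2)     // 256, 128, ..., 1
            totalPixels += (long long)s * s;
        std::printf("mip chain overhead = %.1f%%\n",
                    100.0 * (totalPixels - basePixels) / basePixels);
        // Prints roughly 33.3%.
    }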

    Also, wasn't the GS MIP map generation routine a bit problematic/poor in terms of the way it calculated the LOD?
    Wasn't the hardware implementation a simplified formula based on how far a pixel was from the camera, instead of on how fast the texture mapping coordinates changed from pixel to pixel (which is usually considered the better approach)?
    Were you calculating the LOD using the default GS routine or did/could you use a custom implementation somehow?

    AFAIK, most PS2 games also don't use mipmapping (especially the games developed in Japan), but many do. However, several of the games that use mipmapping seem to have bad/poor LOD, while a few seem to have found a workaround.



    Quote Originally Posted by rusty View Post
    It was possible, in fact, more than possible. If I remember rightly, one of the SDK demos did bump mapping. It's just modulation of the diffuse colour. Think of the GS as a super-computer with 4MB of register space and you're getting an idea of what it can do. (Paraphrasing from a Sony tech conf.)
    Sounds good.



    Quote Originally Posted by rusty View Post
    You could do it in a variety of ways. There was a 6-pass method presented by Sony waaaayyy back in the day. But here's info on a 2 and 4 pass method
    Thanks a lot for sharing this.


    Quote Originally Posted by rusty View Post
    I don't think so. There was this issue about the fill-rate that meant you'd probably kill performance, as the bump mapping on the DC was a two-pass method.
    Yep, nothing F-Zero GX-like then.



    Quote Originally Posted by rusty View Post
    You could probably do it, but the fill-rate (or lack of it) would kill you dead. Maybe on a half-size back buffer and scale it up. But it'd look messy. The Dreamcast's strength was that it could render nice looking textures. It would have made more sense to stick with that than try something exotic.
    The fill-rate seems to be the reason why, "coincidentally", most of the DC games were poor in terms of special effects.



    Quote Originally Posted by rusty View Post
    So environment mapping is just a multi-texture technique, right? You take the env map, which is pre-generated or rendered dynamically, work out the UV coordinates based on the viewing angle to the vertex normals, and maybe also calculate the blending factor. Then you just do an alpha blend. But to do that on the DC, you would have needed to draw every environment mapped triangle twice... it's a CPU killer as well as a fill-rate killer.

    The big problem with dynamic environment mapping on the PS2 was that you had to push so much stuff through the VIF. That would have been the bottleneck, and I think on B3 we just streamed in the fake environment map that had been rendered by the tools with the rest of that streamed section of the track. Oh yes... all the Burnout games streamed in the track on demand. You could do it, but if the faked version looked good enough - run with it! Who's going to notice at the speeds you're driving?
    Hehehe
    I noticed it!

    Thanks a lot for that explanation.



    Quote Originally Posted by rusty View Post
    Are you thinking perhaps of focal blur? Where you'd have one part of the scene "in focus" with the rest all blurred? It's a multi-texture technique and I don't think you could do it on the DC. That's because you need the depth buffer and as you probably know, the PVR2 didn't have a depth buffer.
    My bad, yes, focal blur not radial blur.

    I guess that this is not enough for a feasible workaround on the DC, right?
    http://home.scarlet.be/~pin10741/ISPexpl.htm




    Quote Originally Posted by rusty View Post
    Shhh....he might hear you. Don't make him angry.
    Ahahaha
    "Allocation error".


    Quote Originally Posted by rusty View Post
    Hah hah hah hah hah. Hah. Hah hah hah hah.

    You're a funny guy.
    Heck, it was THAT bad?
    I guess that it was more like SGL-only if you wanted to try to squeeze anything from the DC in terms of polygon rendering... Though, the "optimized" Direct3D was probably enough for half-improved N64 ports and half-assed PC ports.

    Also, talking about poor looking early games and DC-to-PS2 ports:
    ""There are three main chips that you use on the PS2 for computing potential. There's the CPU chip, which is a pretty powerful CPU. There's VU0 [Vector Unit 0] and VU1 [Vector Unit 1]," says Jason Rubin of Naughty Dog. "The CPU of the PlayStation 2 is 100 to 150MHZ slower than the Gamecube. So the base CPU is a slower piece of hardware. However, if you only use that, that would be the equivalent of driving a 12-cylinder car and using only six of its cylinders. It's not the way you do it correctly."

    While first-generation PS2 software has already made great use of the console's 300MHZ "Emotion Engine" CPU, developers have had a much more difficult time tapping into the chip's Vector Units, and because of that early software has suffered from sometimes lackluster graphics. And the only way to truly get the most out of Sony's console is to dig deep and tap into the Vector Unit extensions of the CPU, a feat that has yet to be fully accomplished, according to Rubin. "There are companies that are doing it, but none of the games have shown up yet," he says."

    ""A lot of the first software that came out, because developers were either porting from Dreamcast or doing something really quick, has only used the base CPU because that's the easy thing to do," says Rubin. "Once you get into it though, you start writing code that's sharing the two processors, either the CPU and the VU0 or the CPU and the VU1. And the VU1 is more powerful in a lot of ways than the VU0. So what we've done here is we have the CPU and the VU0 code as a combination, and then we have VU1 code that stands on its own.""

    ""In our game, for example, VU1 is doing everything from our character joint stuff, to our background, to our foreground, and to our particle system right now, so it's quite powerful," Rubin explains. "Meanwhile VU0 is co-processing right now with the CPU to do our collision detection, all of our enemy AI, and a lot of other stuff." The main CPU processor speed gained by using the Vector Units effectively can be anywhere from 20% to 100% faster, according to Rubin. So, if PS2's CPU is being used 100% before tapping into the Vector Units, smart developers can drop that number down to approximately 1% under the right conditions and with a little luck -- a very impressive revelation."

    http://www.ign.com/articles/2000/11/...-playstation-2

    Any comments about that?



    Quote Originally Posted by rusty View Post
    I wasn't in charge of the rendering stuff, and while I did talk to Alex about this, it was his choice. I had other responsibilities and I didn't want to step on his toes. If he felt he could make it work at 60fps, and he did, then no problem.
    Later I came to think that you were probably uploading the 256x256 textures because of the MIP map generation.



    Quote Originally Posted by rusty View Post
    Oh hell yes. With the DC it was a case of binary-cut when trying to work out what was running slowly. With the PS2 PA, you could see how certain systems were stalling and why.
    Well, VC6's debugger was actually quite buggy and far from being as easy to use as the stuff we've had since VC8 onwards. I think people never think about that either.
    Profilers for Visual Studio are common now and do wonders, but back in VC6 times I don't think there were any, and you probably couldn't even debug DLLs and stuff like that as you can now.
    So compared to a full-blown profiler which could work with low-level programming, yep, there was a huge gap.


    Quote Originally Posted by rusty View Post
    I have a funny story about the PS1 PA. In my first job, one of the company founders still dabbled in coding. He came in one weekend while I was working on the DC stuff, and worked away for about maybe an hour. In all this time, he didn't turn on the TV attached to his PS1 PA. He then sent an e-mail to the team, telling us that he'd removed all of the stalls in the rendering code using the PA as a guide and it now ran 80% faster than before. Come Monday morning, the lead programmer goes ape poo. And I mean completely nuclear. He sent a reply to the boss and it went a little like this "Of course there are no stalls, because it no longer draws the f***ing level you c**t. It shows nothing. Did you not even turn on your f***ing TV? Don't EVER touch our code again!"

    And he never did. In fact, it was the last time the founder touched ANY code.
    lmao
    That sort of overreacting-prone environment is what makes the programming labs/offices so rewarding IMO, ahahahah.


    Quote Originally Posted by rusty View Post
    I joined in 2002, and I think most of the really original guys had left. Maybe Paul, but he then went to Sony SoHo after about 9 months of me being there.
    Oh, OK.
