Results 826 to 840 of 986

Thread: Comparison of 5th generation ("32/64-bit") game console hardware

  1. #826
    Hero of Algol kool kitty89's Avatar
    Join Date
    Mar 2009
    Location
    San Jose, CA
    Age
    30
    Posts
    9,725
    Rep Power
    64

    Default

    Quote Originally Posted by gamevet View Post
    @Kool Kitty

    I was not the one that had brought up the cache size, that was ABF. And yes, I'm well aware of the 2k cache size for the PSX.

    There were later games (Conker's Bad Fur Day) that did tricks to allow 2 texture passes on a polygon, but those were very few. I think it's pretty obvious from a design point, that they weren't thinking that the carts could hold a whole lot of texture data and just looking at the textures in most games, you would see a lot of recycled textures between levels.
    Quote Originally Posted by A Black Falcon View Post
    Read the whole section. Start with the first sentence, where he says that your argument is "not good".


    He clearly says that no, your idea that the cartridge medium had anything to do with the poor cache design is wrong.

    And as I said I entirely agree on that point. Also once again remember the 64DD; the N64 was designed from the beginning for larger sized games than the 8MB-only launch titles. The texture cache was poorly designed and flawed, but that poor design choice was just that, and had nothing to do with the choice to use cartridges. SGI and Nintendo didn't know, when they were designing it, that it'd be a problem, clearly. And as he says, other 3d hardwares from around that time also had some similar choices.
    Yes, though I still maintain that ROM sizes most definitely did have an impact on texture res/quality, and variety. (even for cases just using 1 64x32 or 32x32 texture per polygon, there's the possibility of having a much larger number of unique textures in general, and using higher color depth textures -like more 8-bit/256 color and fewer 4-bit/16 color textures, not to mention direct color 16-bit RGBA textures)
Using a larger number of small "normal" size textures is even more useful with a higher polygon budget (like with Turbo 3D and some others -like Rare and Factor 5) since there's going to be a larger number of smaller polygons as such. (plus higher poly count models combined with more textures would be another place where ROM size would more quickly become a bottleneck, and as always, the Expansion Pak would give even more potential advantage to CD-ROM)


    The texture cache size and even the software caching mechanism should have nothing to do with this problem directly. (in fact, the software caching in theory should allow for more potential workarounds)

However, there may be genuine hardware limitations that make using higher res textures per-polygon impractical too, I'm not sure. (it certainly could be more an issue with the RSP code used and/or RSP+RDP drivers, or maybe even the API itself) Had Nintendo opened up RSP and RDP documentation and included decent low-level tools, we'd probably have an obvious answer (and given that some of the custom driver/microcode/API implementations DID work around the typical cache limitations, there's some evidence at least).
    There's been a fair amount of leaked documentation and reverse engineering since then in the N64 homebrew community, so there may be an answer there too.




In any case, I really don't see any real logic behind claims that the SGI Reality chipset was specifically designed around the limitations of small memory sizes. After all, the thing had been well into the hardware design phase (ie beyond paper) before SGI even partnered with Nintendo, and, on top of that, I rather doubt Nintendo immediately decided to focus plans on carts only in 1993 (they didn't officially abandon the SNES CD project until '94 as it was, and then there's the plans for the DD format beyond that). And all those PR claims of SGI using ROM carts to do things "only possible with silicon cartridge" were pure marketing BS, of course. (and like most good marketing lies, it's built on a sliver of truth, just taken way, way out of context and twisted completely one-sided to focus on the few advantages and ignore all the disadvantages)
It's also hard to believe there's absolutely no support for uncached texture rendering at all . . . sure, if the hardware wasn't heavily optimized for it, it would be a good bit slower, but not useless.



Actually, the whole issue of games tending towards "larger numbers of smaller polygons" (ie more complex models, etc) was probably a major reason why the size of texture caches in many GPUs didn't change all that much from 1995 to 1999, though there was additional optimization for uncached textures and, beyond that, texture mapping directly from main system RAM (AGP texture mapping).
For many "lazy" ports of console games, though, that hampered optimized performance, since most of those kept model detail the same, and often texture res the same too. (color depth and resolution were the main enhancements, sometimes draw distance)



On a related note, one other interesting consideration is caching palettized texture data directly rather than decoding it into direct color data for the cache. I'm not sure if any GPU architectures actually do that, but it seems like a really interesting method for making the most of small caches. (the same could apply to smaller buffers -like FIFO queues- for uncached texture fetches, and I believe the Lynx and 3DO do just that . . . though it works better there since they use linear texture reads and nonlinear framebuffer writes, unlike typical blitters/rasterizers, and that's even more important given those systems buffer RLE compressed and paletted texture data, so linear reads are really important)
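As a rough sketch of why caching indices appeals, here's the capacity arithmetic (the 4 KB cache size and the texel formats are hypothetical, purely to illustrate; not tied to any specific GPU):

```python
# Sketch: how many texels a small texture cache can hold, depending on
# whether it stores raw palette indices or pre-decoded direct-color texels.
# Cache size and formats below are illustrative assumptions only.

def texels_in_cache(cache_bytes: int, bits_per_texel: int) -> int:
    """Number of texels that fit in a cache of the given size."""
    return cache_bytes * 8 // bits_per_texel

CACHE = 4 * 1024  # a hypothetical 4 KB on-chip texture cache

print(texels_in_cache(CACHE, 4))   # 4-bit palette indices: 8192 texels
print(texels_in_cache(CACHE, 8))   # 8-bit palette indices: 4096 texels
print(texels_in_cache(CACHE, 16))  # decoded 16-bit direct color: 2048 texels
```

So keeping the palette lookup on the cache's output side would stretch the same silicon 2-4x in texel capacity, at the cost of a lookup stage per fetch.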





    And on the N64's vertex performance, I noticed some things in this old article:
    http://n64.icequake.net/mirror/www.w...mon.co.uk/n64/

Particularly the quote of "more than half a billion arithmetic calculations per second", which I assumed referred to vector unit performance; and indeed, looking into it further, there are several references to ~.5 GFLOPS for the VU.
    Now, "GFLOPS" alone is a bit vague (doesn't address timing for individual operations), but assuming that counts for 32-bit float multiply, and general per-op performance of other FPUs and VUs (or floating point SIMD units like SSE or 3DNow), that would imply a little better than 1/3 the Dreamcast's peak vertex throughput, and roughly 8x the peak vertex throughput of the PSX's GTE. And that equates to a theoretical peak of roughly 2.5 million 3-point polygons/s. (potentially better than that for polygon fans and such where you don't need to re-calculate shared vertices)
That figure would be the rough counterpart to the PSX's 1M vertex and 360,000 polygon figures (ie based on raw vertex performance), while actual poly counts might be bound by fillrate or other things. (occasionally triangle-setup might be a bottleneck)
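To sanity-check that scaling, here's the back-of-envelope math in Python. All figures are the post's own estimates; the ops-per-vertex cost is inferred from the PSX baseline, not a documented number:

```python
# Back-of-envelope scaling: take the PSX GTE's rated throughput as a
# baseline, infer an ops-per-vertex cost, then scale by raw op rate.

GTE_MOPS = 66.8          # PSX GTE op rate, per the post
GTE_VERTS = 1_080_000    # ~1.08M vertices/s rated
RSP_MOPS = 500.0         # ~0.5 G ops/s claimed for the N64's vector unit

ops_per_vertex = GTE_MOPS * 1e6 / GTE_VERTS   # ~62 ops per transformed vertex
rsp_verts = RSP_MOPS * 1e6 / ops_per_vertex   # ~8.1M vertices/s
rsp_tris = rsp_verts / 3                      # ~2.7M independent triangles/s

print(round(ops_per_vertex), round(rsp_verts / 1e6, 1), round(rsp_tris / 1e6, 2))
```

That lands in the ~2.5-2.7M polygons/s ballpark quoted above, before any overhead for clipping, lighting, setup, or shared vertices.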
    Last edited by kool kitty89; 09-30-2013 at 10:45 PM.
    6 days older than SEGA Genesis
    -------------
    Quote Originally Posted by evilevoix View Post
    Dude it’s the bios that marries the 16 bit and the 8 bit that makes it 24 bit. If SNK released their double speed bios revision SNK would have had the world’s first 48 bit machine, IDK how you keep ignoring this.
    Quote Originally Posted by evilevoix View Post
    the PCE, that system has no extra silicone for music, how many resources are used to make music and it has less sprites than the MD on screen at once but a larger sprite area?

  2. #827
    Wildside Expert
    Join Date
    May 2011
    Posts
    181
    Rep Power
    11

    Default

    Look at the patents referred to here - http://beyond3d.com/showthread.php?p=1689612

    RSP is 8 16 bit ALUs @62.5MHz , no floats - ( GOPS, not GFLOPS )
    Pixel pipe is one pixel/clock peak @ 62.5MHz

  3. #828
    Hero of Algol kool kitty89's Avatar
    Join Date
    Mar 2009
    Location
    San Jose, CA
    Age
    30
    Posts
    9,725
    Rep Power
    64

    Default

    Quote Originally Posted by Crazyace View Post
    Look at the patents referred to here - http://beyond3d.com/showthread.php?p=1689612

    RSP is 8 16 bit ALUs @62.5MHz , no floats - ( GOPS, not GFLOPS )
    Pixel pipe is one pixel/clock peak @ 62.5MHz
Hmm, interesting. I wonder if one of the possible differences in "microcode" would have been implementing software floating-point operations using the 16-bit fixed-point VUs, with one of the faster alternatives using fixed-point computation natively. (that also means that the RSP's VU is more along the lines of an MMX unit than SSE/3DNow! or the Dreamcast's VU -let alone the PS2's VUs) That would also imply that the fastest operations would be 16-bit fixed point math . . . which indeed could still be quite useful for some 3D games. (there's trade-offs of course, but a lot of pre-Pentium software rendered stuff -and probably a good chunk of Saturn games- use 16-bit fixed point math exclusively) Hmm, that 500 MOPS (0.5 GOPS) figure would apply there too, with 62.5x8. (so the theoretical 8 million vertices/s or ~2.6 million polygons/s figure would apply for 16-bit vertex computation; I suppose the 500~750k figure listed for "turbo 3D" might apply to 32-bit fixed-point math, and I can only guess that the 100k polygon rate of the original "fast 3D" would be software floating point stuff)
Could any of the figures relate to geometry transformation being done with the R4300's FPU? (that too might end up closer to the 100k figure)

    Interesting point on the pixel pipeline . . . that would limit it to a similar theoretical peak as the PSX (which can draw 2 pixels per cycle, unless I'm mistaken).

That would also imply that the RDP is pretty fast per-clock across most of its features and operations, and probably wouldn't gain much from disabling many of them. It would also imply a good bit of buffering on the pixel output to make efficient use of burst writes, since a single pixel per clock shouldn't be able to saturate the bus -and if you can't come close to saturating peak burst bandwidth (and make efficient serial-burst CPU/RDP/RSP bus sharing), then there's no reason for that super fast RDRAM at all.

    I mean, you've got 500 MHz 9-bit RDRAM with 562.5 MB/s, and even with 18-bits per pixel framebuffer and 18-bit Z-buffer, at 62.5 Mpixels/s that's only 1/2 the peak bandwidth. So, with reasonably sized fifos (or line buffers) for the pixel output, you could do staggered burst writes for rendering and use only moderately more than 1/2 the bus time (more due to latency overhead -plus bandwidth needed for texture cache fills, command/instruction fetches, etc), leaving the rest for framebuffer scanning, CPU access, and RSP processing. (all of which I'd hope are well buffered too, though the CPU obviously has the 24 kB cache, so that's somewhat covered already)

For 32-bit (36-bit) rendering with an 18-bit Z-buffer, that could consume 75% of peak bandwidth . . . a 36-bit Z-buffer would mean 100% saturation at 62.5 Mpix/s, but I doubt that would be necessary (unless it's forced, as with some GPUs where Z-buffer depth must match pixel depth). Though if you disabled Z-buffering entirely, 32-bit rendering would use similar bandwidth to 18-bit rendering with Z-buffering. (well, plus overhead for software Z-sorting)
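A quick check of those bandwidth fractions, using the figures given in the post:

```python
# Check the bandwidth fractions claimed above: 62.5 Mpixel/s of writes
# against the N64's ~562.5 MB/s peak RDRAM bandwidth, for several
# color + Z-buffer format combinations.

PEAK_BPS = 562.5e6   # 9-bit RDRAM at 500 MHz, per the post
FILL = 62.5e6        # peak pixels/s

def fraction(pixel_bits: int, z_bits: int) -> float:
    """Fraction of peak bandwidth consumed at full fillrate."""
    bytes_per_pixel = (pixel_bits + z_bits) / 8
    return FILL * bytes_per_pixel / PEAK_BPS

print(fraction(18, 18))  # 18-bit color + 18-bit Z -> 0.5 of peak
print(fraction(36, 18))  # 36-bit color + 18-bit Z -> 0.75
print(fraction(36, 36))  # 36-bit color + 36-bit Z -> 1.0 (saturated)
```

Note this counts only pixel/Z writes; Z reads for the depth test, texture cache fills, and command fetches would eat further into the remainder.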


Also, if any of the features (particularly bilinear filtering) does force a major hit to fillrate, that would imply the RDP would consume much less than 1/2 the bus bandwidth at most . . . unless buffering for burst accesses is weak. Also, it would mean that 32-bit rendering would be less detrimental to performance (bandwidth not being a major bottleneck anyway), though memory usage would still be a consideration.
    And if it CAN do filtered textures at 62.5 Mpix/s, that means it was faster than the Voodoo 1 or any other PC GPU on the market in 1996 for interpolated texture mapping. (granted some were faster with filtering disabled, or at least theoretically faster)
That, and the 100k polygon/s vertex limit would be the major bottleneck ahead of fillrate in most cases too. (if filtering slowed things down like on the ViRGE 325, that is)


    On another note, that peak fillrate limit would also imply that the N64 actually isn't any better for 2D than the PSX is, at least in terms of pixel rates. Though if you really optimized a driver (or RSP code for that matter) around 2D style rendering, you should still have some advantages, like using the larger texture cache efficiently for tile and sprite style graphics, not to mention nicer alpha blending effects and 32-bit color among other things. (lighting/shading effects, interpolated scaled/rotated sprites, etc)
It at least shouldn't be worse than the PSX GPU for that sort of rendering, and would have advantages in similar areas over the Saturn as the PSX does. (Saturn 2D has the VDP1+VDP2 combination, but less than 1/2 the theoretical peak sprite/blitter throughput and a disadvantage for any games not using VDP2 for more than 1/2 the overall graphics, plus additional limitations in combining VDP1 and VDP2 layers in terms of priority, and especially with alpha blending of individual VDP1 sprites . . . and VDP1's limited half-transparent alpha blending itself)





    Edit:
    Looking at that Beyond3D thread again, this strikes me as odd:
    We timed it at about 64 clock for a read. (I guess that's about 640ns give or take).
    There was no way to prefetch and no read under write.
Going by the RDRAM clock, RCP clock, or CPU clock, 64 cycles is NOT 640 ns. The slowest of those clocks is the RCP's 62.5 MHz, and that's 16 ns cycles, or 1024 ns; at the CPU's 93.75 MHz it's ~683 ns, close to the quoted figure. Either way, that's worse than typical EDO or even SDRAM latencies by 1996 standards.
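For reference, converting the quoted 64-clock latency at each of the three clock domains mentioned:

```python
# Convert the quoted "64 clocks" read latency into nanoseconds at the
# three clock domains in question: RDRAM, CPU, and RCP.

def cycles_to_ns(cycles: int, mhz: float) -> float:
    return cycles / mhz * 1000  # cycles / (MHz * 1e6) seconds, in ns

print(round(cycles_to_ns(64, 500.0)))   # RDRAM clock: 128 ns
print(round(cycles_to_ns(64, 93.75)))   # CPU clock:  ~683 ns (near "about 640ns")
print(round(cycles_to_ns(64, 62.5)))    # RCP clock:  1024 ns
```

Only the CPU-clock reading lines up with the "about 640ns" recollection, which supports the idea that the timing was measured from the programmer's (CPU) side.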

Also, Swaaye's comments about EDO and SDRAM are off too, obviously. EDO (or single-cycle BEDO, rather) was top of the line for PCs in 1996, but SDRAM had been in mass production since '93/94 (the 32X and Saturn used it, obviously). The PSX also used both EDO DRAM (main) and SGRAM (video) in 1994.
    Last edited by kool kitty89; 10-02-2013 at 04:47 AM.

  4. #829
    Wildside Expert
    Join Date
    May 2011
    Posts
    181
    Rep Power
    11

    Default

    Quote Originally Posted by kool kitty89 View Post
Hmm, interesting. I wonder if one of the possible differences in "microcode" would have been implementing software floating-point operations using the 16-bit fixed-point VUs, with one of the faster alternatives using fixed-point computation natively.
    I doubt it - looks like one was 32 bit fixed point , and the other was 16 bit fixed point - after all, the PS1 GTE used 16 bit fixed point for all values.

    Quote Originally Posted by kool kitty89 View Post
    Edit:
    Looking at that Beyond3D thread again, this strikes me as odd:


Going by the RDRAM clock, RCP clock, or CPU clock, 64 cycles is NOT 640 ns. The slowest of those clocks is the RCP's 62.5 MHz, and that's 16 ns cycles, or 1024 ns; at the CPU's 93.75 MHz it's ~683 ns, close to the quoted figure. Either way, that's worse than typical EDO or even SDRAM latencies by 1996 standards.
    Looks like programmer timing, so based on cpu clock ( closer to 100Mhz )

  5. #830
    Hero of Algol kool kitty89's Avatar
    Join Date
    Mar 2009
    Location
    San Jose, CA
    Age
    30
    Posts
    9,725
    Rep Power
    64

    Default

    Quote Originally Posted by Crazyace View Post
    I doubt it - looks like one was 32 bit fixed point , and the other was 16 bit fixed point - after all, the PS1 GTE used 16 bit fixed point for all values.
Huh, I wonder why the polygon rates were so low then. Assuming similar per-cycle performance (per integer unit), the RSP should have close to 8x the vertex throughput (~7.5x), so the 500~750k polygon figure seems kind of low for that. It's 500 MOPS vs ~66.8 MOPS on the GTE. (so 8 million 16-bit vertices per second, and 2.67 million 3-point polygons, by the same metric that the PSX's ~66.8 MOPS GTE does ~1.08M V/s or 360k polys/s, or the Dreamcast's SH4 could do 1.4 GFLOPS for ~8 million polygons/s)

And on another note, the Saturn's SCU DSP uses 32-bit fixed-point math, doesn't it? The SH2s could do either (16-bit is 1.5-2x faster though). Either route is still vastly slower than the GTE though. Actually, for doing 16-bit vertex computations, the SVP's SSP-1601 would be better at full speed, albeit at its 25 MHz rated speed; 26.6/28.6 MHz would be pushing it. (not that there weren't likely other low-cost 16/32-bit DSPs that could have been employed -too bad the SH DSP wasn't available yet; 1 SH2 and 1 SH2-DSP would have been pretty nice)

    Looks like programmer timing, so based on cpu clock ( closer to 100Mhz )
OK, so 64 cycles at 93.75 MHz is ~683 ns access latency. Also, if that's in the context of the CPU, that's also the latency for a full 32-bit access (4 RDRAM accesses), so not directly indicative of the latency of the RAM itself.

Still, that's bad compared to contemporary FPM/EDO/SDRAM and typical DRAM controller implementations of the time. (especially considering the Jaguar's 5-cycle ~188 ns with cheap FPM DRAM on a 26.6 MHz bus, and that's the RC time, not just the RAS latency -actual latency is 2 or 3 cycles, I forget . . . 2 cycles would imply going moderately out of spec for the 80 ns DRAMs used, but they did do that for the page-mode accesses already, so that's not impossible)

And given the 32/16-byte I/D cache line sizes, that latency would still put a big hamper on I/O performance: even if subsequent reads take only 1 cycle, for 32 bytes that's 64+1x7 = 71 cycles, or 67 cycles for a 16-byte D-cache line. (slow enough that even 32-bit wide 80 ns FPM DRAM should have been at least slightly faster for single-line cache fills -using similar RC and PC timing as the Jag, that would be 17-cycle RC and 7-cycle PC, or 38 cycles for a D-cache line or 66 cycles for an I-cache line -and in reality, assuming a DRAM controller at CPU speed, probably a bit better than that due to closer-to-ideal DRAM timing)
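Those cycle estimates follow a simple first-access-plus-sequential-beats model (the timing numbers are the post's own estimates, not measured values):

```python
# Reproduce the cache-line-fill estimates above (in CPU clocks): one long
# initial access followed by fast sequential beats over a 32-bit bus.

def fill_cycles(line_bytes: int, first: int, rest: int, beat_bytes: int = 4) -> int:
    """Cycles to fill a cache line: first access + (beats-1) fast beats."""
    beats = line_bytes // beat_bytes
    return first + (beats - 1) * rest

# N64 RDRAM as estimated above: 64-cycle first access, 1 cycle per beat after
print(fill_cycles(32, 64, 1))   # 32-byte I-cache line: 71 cycles
print(fill_cycles(16, 64, 1))   # 16-byte D-cache line: 67 cycles

# Hypothetical 32-bit FPM DRAM with Jaguar-like timing: 17-cycle RC, 7-cycle page
print(fill_cycles(32, 17, 7))   # 32-byte line: 66 cycles
print(fill_cycles(16, 17, 7))   # 16-byte line: 38 cycles
```

The comparison shows why the huge initial latency dominates for short lines: the FPM setup wins on the 16-byte D-line (38 vs 67) despite much slower sequential beats.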


    The RCP almost certainly handles access much more efficiently though, and graphics accesses also tend to use long bursts fairly often.

    Still, though, that also implies the N64's CPU has only a fraction of the nominal main memory bandwidth of the PSX or Saturn.
    Last edited by kool kitty89; 10-04-2013 at 09:40 PM.

  6. #831
    Wildside Expert
    Join Date
    May 2011
    Posts
    181
    Rep Power
    11

    Default

    Quote Originally Posted by kool kitty89 View Post
Huh, I wonder why the polygon rates were so low then. Assuming similar per-cycle performance (per integer unit), the RSP should have close to 8x the vertex throughput (~7.5x), so the 500~750k polygon figure seems kind of low for that. It's 500 MOPS vs ~66.8 MOPS on the GTE. (so 8 million 16-bit vertices per second, and 2.67 million 3-point polygons, by the same metric that the PSX's ~66.8 MOPS GTE does ~1.08M V/s or 360k polys/s, or the Dreamcast's SH4 could do 1.4 GFLOPS for ~8 million polygons/s)
    There are more overheads involved than just counting the multiply-adds for the transform on the N64.

    Quote Originally Posted by kool kitty89 View Post
And on another note, the Saturn's SCU DSP uses 32-bit fixed-point math, doesn't it? The SH2s could do either (16-bit is 1.5-2x faster though). Either route is still vastly slower than the GTE though. Actually, for doing 16-bit vertex computations, the SVP's SSP-1601 would be better at full speed, albeit at its 25 MHz rated speed; 26.6/28.6 MHz would be pushing it. (not that there weren't likely other low-cost 16/32-bit DSPs that could have been employed -too bad the SH DSP wasn't available yet; 1 SH2 and 1 SH2-DSP would have been pretty nice)
    The SSP is too limited, no divide for example.



    Quote Originally Posted by kool kitty89 View Post
    The RCP almost certainly handles access much more efficiently though, and graphics accesses also tend to use long bursts fairly often.

    Still, though, that also implies the N64's CPU has only a fraction of the nominal main memory bandwidth of the PSX or Saturn.
    Not really - CPU access uses the cache in most cases.

  7. #832
    Hero of Algol kool kitty89's Avatar
    Join Date
    Mar 2009
    Location
    San Jose, CA
    Age
    30
    Posts
    9,725
    Rep Power
    64

    Default

    Quote Originally Posted by Crazyace View Post
    The SSP is too limited, no divide for example.
Software divide is 32 cycles on the SSP-1601 iirc, same as the SCU DSP in the Saturn. (clock for clock, the SSP seems similar to the SCU's DSP)

    Anyway, I was remembering the SH2 specs wrong anyway. Peak throughput for 16-bit multiplication is 1 cycle (with 2 cycle latency), so it should be around as fast as the SSP for that anyway (not as cheap though), and division is in hardware and somewhat faster per-clock as well (more so given the clock speed discrepancy of the SCU), not sure of the cycle times for SH2 division though.

  8. #833
    Mastering your Systems Shining Hero TmEE's Avatar
    Join Date
    Oct 2007
    Location
    Estonia, Rapla City
    Age
    30
    Posts
    10,095
    Rep Power
    110

    Default

Mul and shift is the only way to divide on all the cheap DSPs. Those DSPs can all mul in one cycle, and get free shifts on most instructions.

    10 / 2 is same as 10 * 0.5
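A minimal sketch of that trick in 16.16 fixed point (the format choice here is arbitrary, just for illustration; a real DSP would typically get the reciprocal from a lookup table or a Newton-Raphson iteration rather than an integer divide):

```python
# Divide via multiply-and-shift, as on DSPs with no hardware divider:
# represent the reciprocal in fixed point, then one mul + one shift.

FRAC_BITS = 16  # 16.16 fixed-point format (assumed for this sketch)

def fixed_recip(divisor: int) -> int:
    """Reciprocal of a positive integer divisor, in 16.16 fixed point."""
    return (1 << (2 * FRAC_BITS)) // (divisor << FRAC_BITS)

def div_by_mul(x: int, divisor: int) -> int:
    # one multiply plus one (often free) shift replaces the divide
    return (x * fixed_recip(divisor)) >> FRAC_BITS

print(div_by_mul(10, 2))    # 10 / 2 = 5, via 10 * 0.5
print(div_by_mul(100, 4))   # 100 / 4 = 25
```

The result truncates like an integer divide as long as the reciprocal carries enough fractional bits for the operand range.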
    Death To MP3, :3
    Mida sa loed ? Nagunii aru ei saa "Gnirts test is a shit" New and growing website of total jawusumness !
    If any of my images in my posts no longer work you can find them in "FileDen Dump" on my site ^

  9. #834
    Hero of Algol kool kitty89's Avatar
    Join Date
    Mar 2009
    Location
    San Jose, CA
    Age
    30
    Posts
    9,725
    Rep Power
    64

    Default

    Quote Originally Posted by TmEE View Post
Mul and shift is the only way to divide on all the cheap DSPs. Those DSPs can all mul in one cycle, and get free shifts on most instructions.

    10 / 2 is same as 10 * 0.5
Still a lot better than having no hardware multiplier at all (let alone a fast multiplier like we have in these cases) . . . like with most 8-bit CPUs, or just really slow hardware (or microcoded) multiply on some early 16-bit CPUs. (well, even later 16/32-bit CPUs -ie mid/late 80s- tended to be fairly slow for multiplies . . . x86 kind of stopped pushing that in favor of faster FPU performance too -the 486 went in that direction and the Pentium much more so; granted, MMX allowed for very fast integer arithmetic -except by that point, most/all 3D geometry engines on PC had moved to floating point operations exclusively, even if MMX could technically do it faster -it would have been interesting if the original Pentium had focused on that from the start and less on the FPU)

On another note, I seem to recall a reference to the Super FX actually having hardware divide, with that and the hardware multiply taking several cycles. I forget the exact figures, but I think it was 2 cycles for 8-bit multiply, 4 for 16-bit multiply, and 16 for 16-bit divide. (definitely another area where the Super FX is much different from the Flare 1 DSP Ben Cheese had designed earlier, with its more typical 1-cycle 16x16 multiply -I remember there being a reference to that using a 36-bit product rather than 32-bit too, or more likely as an intermediate for multiply+accumulate operations)
    Then again, Super FX isn't a traditional DSP per-se, but a really primitive 16-bit RISC CPU with DSP-like functionality.
    Last edited by kool kitty89; 10-05-2013 at 10:16 PM.

  10. #835
    Raging in the Streets Yharnamresident's Avatar
    Join Date
    May 2013
    Location
    British Columbia
    Posts
    4,080
    Rep Power
    65

    Default

    So I hear the Saturn's board was very complex, and it couldn't be revised with a smaller cheaper model. Like the original Xbox.

Do you guys agree with this? Maybe they could've cut the cartridge slot and had the RAM upgrade built in.
    Certified F-Zero GX fanboy

  11. #836
    I remain nonsequitur Shining Hero sheath's Avatar
    Join Date
    Jul 2010
    Location
    Texas
    Age
    42
    Posts
    13,313
    Rep Power
    131

    Default

    Everything can be shrunk and simplified over time. Any processor can be combined with another processor on the same die. It is just a matter of whether the production run lasts long enough to merit the added R&D cost of redesigning the hardware.
    "... If Sony reduced the price of the Playstation, Sega would have to follow suit in order to stay competitive, but Saturn's high manufacturing cost would then translate into huge losses for the company." p170 Revolutionaries at Sony.

    "We ... put Sega out of the hardware business ..." Peter Dille senior vice president of marketing at Sony Computer Entertainment

  12. #837
    Master of Shinobi
    Join Date
    Sep 2012
    Posts
    1,284
    Rep Power
    38

    Default

    Quote Originally Posted by azonicrider View Post
    So I hear the Saturn's board was very complex, and it couldn't be revised with a smaller cheaper model. Like the original Xbox.

Do you guys agree with this? Maybe they could've cut the cartridge slot and had the RAM upgrade built in.
    It couldn't be shrunk down because Sega stopped giving a shit about the Saturn and went looking for replacements as early as a few months after the Saturn came out.

    In theory, if they remove the MPEG cart slot, a lot of the components could've been simplified and made cheaper. By late 1997 they also combined a few other parts, so they definitely could've made a smaller form factor Saturn.

  13. #838
    Raging in the Streets Yharnamresident's Avatar
    Join Date
    May 2013
    Location
    British Columbia
    Posts
    4,080
    Rep Power
    65

    Default

    Quote Originally Posted by zyrobs View Post
    It couldn't be shrunk down because Sega stopped giving a shit about the Saturn and went looking for replacements as early as a few months after the Saturn came out.

    In theory, if they remove the MPEG cart slot, a lot of the components could've been simplified and made cheaper. By late 1997 they also combined a few other parts, so they definitely could've made a smaller form factor Saturn.
Yeah, it's not like Sega was known for staying 100% behind their hardware.

  14. #839
    Hero of Algol TrekkiesUnite118's Avatar
    Join Date
    May 2010
    Age
    32
    Posts
    8,217
    Rep Power
    128

    Default

    Quote Originally Posted by azonicrider View Post
Yeah, it's not like Sega was known for staying 100% behind their hardware.
They stuck with the Master System in Europe and Brazil for quite a long time, and the Genesis worldwide for quite some time; even the Sega CD got a rather healthy lifespan. And in Japan I'd say the Saturn had an OK run -yeah, it could have gone longer, but it was in the realm of being acceptable. Really, Sega only got that reputation after the 32X and US Saturn.

  15. #840
    Master of Shinobi
    Join Date
    Sep 2012
    Posts
    1,284
    Rep Power
    38

    Default

    Those were Sega of Europe and Sega of America.
    The parent company wanted their entire lineup axed globally and focus on the Saturn, since in Japan they had awful sales.
