Nintendo used to have a page on emulation on their website incorrectly claiming that it was always illegal and that all emulators had been created solely to enable piracy. This new claim isn’t compatible with their having had that page.
The new law allows you to have more than one charging connector, provided that either the USB-C one is the best of them, or the USB-C one is as capable as the spec allows. If the new connector’s genuinely better, it’ll beat even a maxed-out USB-C connector, so devices will just provide it in addition to one.
Male-to-female A-to-A cables are pretty common (they’re just basic extensions) and totally legal under the spec, provided they’re limited to a certain length or contain a powered repeater. It’s just the rare male-to-male (which my keyboard stupidly uses) and even rarer female-to-female that aren’t legal. There’s also the exception of USB On-The-Go cables with a micro-B end and a female A end, for devices like smartphones that could act as a host or connect to one, back before they switched to USB-C.
It adds the executable permission (without which things can’t be executed) to all the files in the game’s directory. Only a few of those files ever need to be executed, and there’s a dedicated permission to control what can and can’t be executed for a reason. Windows doesn’t have a direct equivalent, so setting it on everything gives the impression that they’re trying to make the filesystem behave like Windows rather than working with the OS.
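For contrast, here’s a minimal sketch of the more targeted approach, adding the execute bit only to the handful of files that actually need it (the directory and file names are made up for illustration):

```python
import os
import stat

# Hypothetical paths, purely for illustration - in practice this would be
# whatever launcher/binaries the game actually needs to execute.
GAME_DIR = os.path.expanduser("~/Games/SomeGame")
NEEDS_EXEC = ["launcher.sh", "bin/game", "bin/crash-handler"]

for rel_path in NEEDS_EXEC:
    path = os.path.join(GAME_DIR, rel_path)
    mode = os.stat(path).st_mode
    # Add the execute bit for user/group/other on this file only, leaving
    # every other file in the directory untouched.
    os.chmod(path, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
```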
Selling old games and new games isn’t mutually exclusive, and more money tends to be spent on new games than old ones. It’s not unreasonable to expect that selling new games too could subsidise the work to make old games run on modern platforms.
Like other commenters have said, start by asking the upstream developer (whether that’s by sending a message with a link to the fork or by sending a mega-PR whose description says you don’t expect it to be merged as-is). They should be the best judge of how they’d prefer to handle it. The thing I’d add is that you should try to avoid taking it personally if their preferred approach isn’t one you think is a good idea. Sometimes good fixes never get merged because disagreements become too heated, even when everyone’s basically on the same page about the fix being good. There’s also a decent chance that your refactors are things the upstream developer explicitly doesn’t want and would otherwise have done themselves when implementing the same fix, or that they don’t agree your fix is good enough. They won’t want to be on the hook for maintaining contributions that use approaches and code style they don’t like, and that’s okay. They also might know something you don’t about their project that would make something that’s obviously a good idea to you obviously a bad idea to them.
Basically, just try and remember that if it’s a hobby project, it makes progress when the maintainer is having a good time, and gets abandoned when they’re not anymore, so try and avoid making a mess and having arguments when they’re the one that’ll have to deal with any fallout from any mistakes.
You’re not throttling between 0% output and 100% output, as that takes weeks or months; you’re throttling within a limited range at the upper end of the output power. Because a nuclear reactor puts out so much power compared to a combined-cycle gas turbine, going down to 80% power has a comparable impact to totally shutting down a gas turbine. It doesn’t need to be instant to be used for dynamic load - throttling a gas turbine isn’t instant either. After the fuel flow is increased or decreased, it takes time for the heat exchanger to warm up or cool down, time for the first turbine to speed up or slow down as the flow of the Brayton-cycle working gas changes, then more time for the second heat exchanger to heat up or cool down and for the Rankine-cycle turbine to speed up or slow down as the flow of steam changes, and only then is the new desired output power achieved.
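To put rough numbers on that comparison (these are ballpark capacities for a large reactor and a single combined-cycle unit, not figures for any specific plant):

```python
# Ballpark figures, purely illustrative.
reactor_capacity_mw = 1100   # a large pressurised water reactor
ccgt_unit_capacity_mw = 250  # a single mid-sized combined-cycle unit

throttled_to = 0.80  # reactor turned down to 80% of full output
shed_by_reactor_mw = reactor_capacity_mw * (1 - throttled_to)

print(f"Reactor at 80%: sheds {shed_by_reactor_mw:.0f} MW")              # ~220 MW
print(f"CCGT unit shut down entirely: sheds {ccgt_unit_capacity_mw} MW")
```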
Wikipedia puts the average emission time for delayed neutrons at fifteen seconds, which, while ludicrously slow compared to a bomb, is really fast compared to the day-night cycle that accounts for most dynamic load variance in a country with plenty of renewables or with heavy industry that doesn’t operate at night, so there’s plenty of time for the power output to respond as long as you’re restricting the range it’s operating in.
There are plenty of nevers and almost nevers with this case already, so it’s not unreasonable to worry that there might be more.
It’s likely that my data’s out of date, and that graph does include it. If it didn’t, it would be hard to see how photovoltaics could kill enough people to end up with a death rate so similar to nuclear’s when accidents like Chornobyl are included.
The ones in service right now are mostly/all designed that way, but that’s a design decision rather than an inherent limitation. They cost basically the same to run whether they’re at maximum output or minimum, so they’re most cost-effective as base load, and if you need responsive output, you can probably build something else for less money. If you ignore that and build one anyway, you only need fast motors on the control rods and the output can be changed as quickly as a gas turbine can be throttled, but there’s no need for that if you know you’re just building for base load.
Lots of people die mining the materials for photovoltaics, even with emerging technologies that reduce rare earth usage, especially because the countries with a lot of rare earth mineral wealth mostly have terrible human rights, slavery and worker safety records. In principle, this could be reduced without technological changes, e.g. by refusing to buy rare earth metals unless they’re extracted in line with best practice and that can be proven (it’s typically cheaper to fake the evidence that your workers are happy, healthy and alive than to actually keep them happy, healthy and alive), but then things get more expensive, and photovoltaics are already not the cheapest.
Even when disasters like Chernobyl are included, nuclear energy kills fewer people per unit of energy generated than any of the alternatives. E.g. dams burst, and people like building towns downstream of hydro plants. Even wind, which is basically only deadly due to accidents when installing and repairing turbines (e.g. people falling off, or fires breaking out too abruptly to climb down from), has those accidents often enough that it ends up more dangerous than nuclear. Burning gas, coal and biomass all work out much deadlier than renewables and nuclear, but if your risk tolerance doesn’t permit nuclear, it doesn’t permit electricity in any form.
It doesn’t help when all the senior employees from the last time you built a reactor have retired and anyone who hasn’t retired was pretty junior the last time around. For projects where you have to get everything right the first time, so you can’t just try things to see what works, it’s devastating to stop doing them if you might ever need to start again.
The main lore change people refer to generally seems to come down to them thinking the show is set decades earlier than it is. Part of the plot of the show is working out why the NCR isn’t the dominant faction anymore, plenty of characters remember it, and some used to live in Shady Sands. The status quo changing years after New Vegas was set doesn’t mean that the events of Fallout 1, 2 and New Vegas didn’t happen.
CEOs are only replaceable because lots of people want to be CEOs. It’s not unreasonable to think that something like this would make them less keen. People don’t normally put themselves into situations where they’ll be legally obligated to do things that might get them shot.
In other threads, people have suggested that he might be carrying the manifesto in case he was shot and killed if/when arrested.
They have to defend their trademark. They don’t have to defend copyright, and most of Nintendo’s reputation comes from copyright claims. Someone streaming a let’s play isn’t selling a counterfeit Mario game; they’re just showing you things in a real Mario game, so there’s no trademark claim.
They’re also big abusers of the fact that most of the people they make copyright claims against can’t afford to defend themselves against such a behemoth. Even if you’re sure you’ve not violated their copyright and your lawyer’s sure, too, it’ll be much cheaper to roll over than get the legal system to agree with you.
The intended use for this kind of product is that you hire a company to break into your company, and then tell you how they did it so that criminals (or, if you’re someone like a defence contractor, foreign spies) can’t do the same thing later. Sometimes they’re also used by journalists to prove that the government or a company isn’t taking necessary precautions or by hobbyists at events where everyone’s aware that everyone else will try to break into their stuff. There’s typically vetting of anyone buying non-hobbyist quantities of anything, and it’s all equipment within theoretical reach of organised crime or state actors, so pentesters need to have access, too, or they can’t reasonably assess the real-world threat that’s posed.
It’s not Kessler Syndrome until it’s so bad that we can’t feasibly launch anything new. A single cascading collision chain might calm down again after breaking a lot of stuff without any catastrophic long term impact if all the debris ends up either in a stable orbit that can be predicted and avoided by other objects, or unstable orbits that decay until the debris falls out of orbit.
You can jam the Windows UI by spawning loads of processes with priority equivalent to or higher than explorer.exe (which runs the desktop), as they’ll compete for CPU time. The same will happen if you do the equivalent under Linux. However, if you have one process that does lots of small allocations, then under Windows, once the memory and page file are exhausted, eventually an allocation will fail, and if the application’s not set up to handle that, it’ll die and you’ll have free memory again. Doing the same under every desktop Linux distro I’ve tried (which have mostly been Ubuntu-based, so others may handle it better) will just freeze the whole machine. I don’t know the details, but I’d guess the process gets suspended until its request can be fulfilled, so as long as there’s memory, it gets it eventually, but it never gets told to stop or gets killed, so there’s no memory left for things like the desktop environment to use.
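A minimal sketch of the lots-of-small-allocations case, if you want to see it for yourself (don’t run it on a machine you care about, and the exact behaviour will depend on things like overcommit and swap configuration):

```python
# Allocate ~1 MiB at a time and keep every chunk alive so nothing can be
# freed. On Windows this eventually raises MemoryError once RAM and the
# page file are exhausted; on a typical Linux desktop the machine tends
# to start thrashing long before the process itself is stopped.
chunks = []
try:
    while True:
        chunks.append(bytearray(1024 * 1024))
except MemoryError:
    # Only reached where the allocator actually reports the failure.
    chunks.clear()
    print("allocation failed; memory released again")
```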