Sounds like there may be routing trouble between your ISP and the Linode datacenter (unless you're getting a _lot_ of reports from users). During the downtime, get a traceroute to your server and report it in a ticket. You might also want to try requesting a migration to another datacenter, where you might have better luck.
The instance always runs at the market rate. You set a _maximum_ price, but you're charged the market rate whenever it's lower - eg, if you bid $0.50/hr and the spot price is $0.12/hr, you pay $0.12/hr. So there's no sense in building such a reactive system when Amazon does it for you.
It's uploading the same log file every 30 seconds to my S3 bucket, so I suspect they're still working out the kinks :)
That said, the documentation on the datafeed stuff is annoyingly sparse - the format is reasonably self-documenting, but I wish they'd commit to it somewhere.
"A transfer of copyright ownership, other than by operation of law, is not valid unless an instrument of conveyance, or a note or memorandum of the transfer, is in writing and signed by the owner of the rights conveyed or such owner's duly authorized agent."
The problem is when the execs start feeling like they will never be caught - or worse, when they start tricking themselves into thinking that what they're doing isn't fraud.
Sure, they'll be caught eventually and punished, but the damage is done.
I'm sure DMA happens between the controller and host - there's no way they could achieve those speeds with PIO. And DMA has nothing to do with pipelining - the controller could easily be programmed with a list of buffers in which to place USB responses.
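As a rough sketch of the buffer-list idea - the descriptor layout below is invented for illustration; real host controllers (EHCI, xHCI) define their own formats:

    #include <stdint.h>
    #include <stdio.h>

    struct dma_desc {
        uint64_t buf_addr;  /* physical address the controller DMAs into */
        uint32_t buf_len;   /* capacity of this buffer in bytes */
        uint32_t flags;     /* eg, interrupt-on-completion, end-of-ring */
    };

    #define RING_SIZE 16
    #define BUF_SIZE  2048

    static struct dma_desc rx_ring[RING_SIZE];
    static uint8_t buffers[RING_SIZE][BUF_SIZE];

    int main(void) {
        /* The driver fills the ring once; the controller then walks it,
         * DMA-ing each USB response into the next free buffer with no
         * per-transfer CPU involvement. */
        for (int i = 0; i < RING_SIZE; i++) {
            rx_ring[i].buf_addr = (uint64_t)(uintptr_t)buffers[i];  /* would be a physical address */
            rx_ring[i].buf_len  = BUF_SIZE;
            rx_ring[i].flags    = (i == RING_SIZE - 1) ? 1u : 0u;   /* mark end of ring */
        }
        printf("ring of %d descriptors ready for the controller\n", RING_SIZE);
        return 0;
    }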
Actually, Akamai hosts their CDN nodes right at ISPs already - though this is mostly to serve customers on that very ISP without paying for backbone bandwidth. So in a way we already have this distributed CDN...
They've _been_ routed like that forever. That's why people have been using them - because it doesn't do any harm (the data gets dropped at the first BGP-aware router). The issue is we can't _start_ using them now.
Yes, the story was a winner of Reader's Digest's "First Person" story award - for unique, _true_ personal experiences. See page 214 of the Nov 1955 edition of Reader's Digest for more details.
> I'm not sure I get the complaint. It's Apple's operating system, it's never been free software, and clearly they're going to exert some control over what apps get installed. That's clearly within Apple's power and rights to do, and it doesn't hurt anyone but competing app vendors who want to use Apple's (!) operating system.
No, it's the user's phone, and clearly _they_ should exert control over what apps get installed. After the sale, what gives Apple the moral right to tell the owner of the device that they can't install Google Voice?
On the other hand, the APIs gvoice exposes run on Google's hardware, using their resources, and their trunk lines. Google has every right to limit use of their equipment.
No, they're not dropped, as that would violate attribution requirements. They are, however, compacted and compressed to take less space (at the expense of more CPU to access, but the typical user won't notice this)
It's worth noting that on modern OSes, memory newly allocated from the OS is forcibly zeroed. So if you're allocating a really large sparse set (large enough that it won't come from existing free heap RAM) and don't intend to clear it repeatedly, the sparse set won't save any time, and calloc (which can skip the explicit zeroing when it's getting new pages from the OS) will probably be a better idea.
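A minimal sketch of the difference, assuming the allocation is large enough that the allocator services it with fresh (already-zeroed) pages from the OS:

    #include <stdlib.h>
    #include <string.h>

    #define N (64 * 1024 * 1024)  /* 64MB - big enough to come straight from the OS */

    int main(void) {
        /* Forces every page in immediately: the memset touches all 64MB,
         * even though fresh OS pages were already zero. */
        char *a = malloc(N);
        if (a) memset(a, 0, N);

        /* Can skip the explicit zeroing for fresh OS pages; nothing is
         * touched until first use, so this returns almost instantly. */
        char *b = calloc(N, 1);

        free(a);
        free(b);
        return 0;
    }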
The point is that with centralized systems, only committers can do things like branching or preparing commits. With a DVCS, everyone gets the same tools, so the rights granted by having a patch accepted are simply having it merged - nothing more, nothing less.
The specification states that "your routine should modify the given zero-terminated string in place". This implies that a zero-terminated string is passed in, and that it is mutable. If this is not the case, it is impossible to perform as specified; any behavior is wrong. An assertion error would be the ideal way to handle things; an immediate segfault is almost as good (and probably the practical thing to do if the string is immutable or not null-terminated). And of course, there are some error situations you simply can't detect - eg, not all cases of non-terminated strings are detectable, because you run into and mutate some other heap garbage before hitting an unmapped or immutable page.
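For illustration, a sketch of that contract - the routine itself (in-place uppercasing) is made up, but the error-handling pattern is the point:

    #include <assert.h>
    #include <ctype.h>
    #include <stdio.h>

    void upcase_in_place(char *s) {
        assert(s != NULL);                          /* detectable violation: fail loudly */
        for (; *s; s++)                             /* a missing terminator is undetectable */
            *s = (char)toupper((unsigned char)*s);  /* faults naturally on immutable memory */
    }

    int main(void) {
        char buf[] = "hello";  /* mutable and zero-terminated: the happy path */
        upcase_in_place(buf);
        puts(buf);             /* HELLO */
        /* upcase_in_place("hello") would likely segfault immediately, since
         * string literals typically live in read-only memory. */
        return 0;
    }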
Nor does http://www.solipsys.co.uk/new/ColinWright.html?Personal - though I'm not sure that Colin Wright even wrote that, given I've yet to find a link to the article from the root, and the article doesn't have the author's name ...
More likely, you'd be unable to consume the caffeine in such a diluted form fast enough to exceed the rate at which the liver clears it - it takes time for the stomach to empty out all that water you're consuming along with it. Remember that humans detoxify caffeine much faster than most other mammals, too.
That said, keep in mind that if you need to keep at least one instance up, you need to put them in multiple availability zones, or vary the max spot prices - if there's a load spike that leads to your spot instances being terminated, it'll likely hit all of the spot instances at a given price in the same AZ.
You could put data on there, sure, but typically the OS won't try to execute garbage data left over from a previous boot - it'll just get overwritten. I suppose you could have it swap out data after boot, but there are easier ways when you have physical access.
Only a copyright holder can bring legal action in this case. If you're just a customer, you can't bring action by yourself, as you're not a party to the violation or license agreement between HTC and the copyright holder(s).
There are only so many resources on each physical host - when you hit the resource usage of an additional Linode, you've effectively used up the space that could have gone to another customer. So you have to pay their rent as well.
Admittedly this really doesn't make that much sense for bandwidth... but when you look at the raw price for bandwidth, it's not all that bad anyway.
Linode doesn't use SSDs (yet) - the founder has been spotted on their IRC channel in the relatively recent past mentioning the cost/performance isn't where they want it yet.
hdparm only does sequential reads, so don't expect a night-and-day difference between SSDs and rotating disks. Plus, that benchmark reads out of the disk (buffer) cache, so you might really be testing your memory speed.
Or perhaps they consider the skills needed to pass the test easily to be the bare minimum needed for the job, and anyone who can't pass it quickly isn't qualified (but hey, if you learned something from it, good for you!).
On the contrary - there are a few things they can do. They could examine the difference in results between the two tests at the individual-student level, for example. Or, using the set of students who admit to cheating as a training set, do a question-level analysis.
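A toy sketch of the individual-level comparison (the scores and the two-standard-deviation threshold are made up for illustration):

    #include <math.h>
    #include <stdio.h>

    #define N 8

    int main(void) {
        double test1[N] = {55, 60, 72, 48, 80, 65, 58, 70};
        double test2[N] = {57, 61, 95, 50, 82, 66, 90, 71};
        double delta[N], mean = 0, var = 0;

        for (int i = 0; i < N; i++) { delta[i] = test2[i] - test1[i]; mean += delta[i]; }
        mean /= N;
        for (int i = 0; i < N; i++) var += (delta[i] - mean) * (delta[i] - mean);
        double sd = sqrt(var / N);

        /* Flag students whose improvement is a statistical outlier. */
        for (int i = 0; i < N; i++)
            if (delta[i] > mean + 2 * sd)
                printf("student %d: jump of %+.0f points - worth a closer look\n", i, delta[i]);
        return 0;
    }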
Or you can keep a backup Linode in a different datacenter. Or better yet, have redundancy with a completely separate provider, so you can deal with a complete failure of either provider.
If the integers cannot be guaranteed to be distinct, and all of them must be presented at the end, then no algorithm which restricts itself to O(1) memory usage can be correct. Since the problem restricts you to using 4KB of RAM (O(1)), and there is presumably a solution, you can conclude that you need not count non-distinct inputs - but yeah, it's kind of a bad problem description.
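For illustration, the classic bitmap approach, and why distinctness matters - 4KB is 32768 bits, enough to record _presence_ of each value in [0, 32768) (a range I've assumed so the bitmap fits exactly), but not _counts_:

    #include <stdint.h>
    #include <stdio.h>

    static uint8_t seen[4096];  /* 4KB = 32768 bits */

    static void mark(uint16_t v)   { seen[v >> 3] |= (uint8_t)(1u << (v & 7)); }
    static int  is_set(uint16_t v) { return seen[v >> 3] & (1u << (v & 7)); }

    int main(void) {
        uint16_t input[] = {3, 17, 3, 1024};  /* note the duplicate 3 */
        for (size_t i = 0; i < sizeof input / sizeof *input; i++)
            mark(input[i]);
        for (uint32_t v = 0; v < 32768; v++)
            if (is_set((uint16_t)v))
                printf("%u\n", v);  /* the duplicate 3 comes out only once */
        return 0;
    }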
That requires the data the government employee has to be byte-for-byte identical to a copy already in Dropbox, which is unlikely for an unintended leak.
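A toy sketch of the point: content-addressed dedup only fires when the bytes hash identically. FNV-1a below stands in for whatever hash Dropbox actually uses (which I don't know):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static uint64_t fnv1a(const uint8_t *p, size_t n) {
        uint64_t h = 1469598103934665603ULL;
        while (n--) { h ^= *p++; h *= 1099511628211ULL; }
        return h;
    }

    int main(void) {
        const char *original = "classified.pdf contents";
        const char *reexport = "classified.pdf contents ";  /* one trailing space */
        int hit = fnv1a((const uint8_t *)original, strlen(original)) ==
                  fnv1a((const uint8_t *)reexport, strlen(reexport));
        printf("dedup hit: %s\n", hit ? "yes" : "no");  /* "no" - any byte difference defeats it */
        return 0;
    }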
The exploit is specifically designed to only work on a small subset of kernel builds, as indicated in the header:
* In the interest of public safety, this exploit was specifically designed to
* be limited:
*
* * The particular symbols I resolve are not exported on Slackware or Debian
* * Red Hat does not support Econet by default
* * CVE-2010-3849 and CVE-2010-3850 have both been patched by Ubuntu and
* Debian
*
* However, the important issue, CVE-2010-4258, affects everyone, and it would
* be trivial to find an unpatched DoS under KERNEL_DS and write a slightly
* more sophisticated version of this that doesn't have the roadblocks I put in
* to prevent abuse by script kiddies.
A reasonable fallback would be to look at the proportion of commits that touch files under each directory - the more active a directory is, the more likely it is to be of interest. Still probably not worth it in the majority of cases, though.
I think the idea is you won't bounce a job from your computer, up to Google, then back to your printer - instead, Google will generate the job directly, and the printer will download it from Google and print.
It's better than having the site be completely inaccessible. You'd only deploy this when you come under load. Additionally, many of the more important robots can be identified reliably (eg, googlebot has a DNS handshake that can positively identify legitimate googlebots) - and even if you get a false positive, if you filter out all their packets, they're likely to assume a temporary failure and come back later.
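A minimal sketch of that handshake (forward-confirmed reverse DNS, IPv4-only for brevity):

    #include <arpa/inet.h>
    #include <netdb.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>

    static int ends_with(const char *s, const char *suffix) {
        size_t sl = strlen(s), xl = strlen(suffix);
        return sl >= xl && strcmp(s + sl - xl, suffix) == 0;
    }

    /* Returns 1 if `ip` (dotted quad) passes the forward-confirmed
     * reverse DNS check for Googlebot, 0 otherwise. */
    int is_googlebot(const char *ip) {
        struct sockaddr_in sa = {0};
        char host[1025];
        sa.sin_family = AF_INET;
        if (inet_pton(AF_INET, ip, &sa.sin_addr) != 1)
            return 0;

        /* Step 1: reverse lookup, requiring that a name exist. */
        if (getnameinfo((struct sockaddr *)&sa, sizeof sa,
                        host, sizeof host, NULL, 0, NI_NAMEREQD) != 0)
            return 0;
        if (!ends_with(host, ".googlebot.com") && !ends_with(host, ".google.com"))
            return 0;

        /* Step 2: the forward lookup of that name must include the original
         * IP, or anyone could fake it with their own reverse DNS. */
        struct addrinfo hints = {0}, *res, *p;
        hints.ai_family = AF_INET;
        if (getaddrinfo(host, NULL, &hints, &res) != 0)
            return 0;
        int ok = 0;
        for (p = res; p; p = p->ai_next) {
            struct sockaddr_in *fwd = (struct sockaddr_in *)p->ai_addr;
            if (fwd->sin_addr.s_addr == sa.sin_addr.s_addr) { ok = 1; break; }
        }
        freeaddrinfo(res);
        return ok;
    }

    int main(void) {
        printf("%d\n", is_googlebot("66.249.66.1"));  /* an IP in a Googlebot range */
        return 0;
    }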
Perhaps the problem is that it was, in fact, adaptable - meaning that if all the supernodes your client knows about go down, you're no longer able to bootstrap.
In fact, I found I was able to reconnect faster after the outage by deleting the shared.xml file that stores known supernodes - presumably connecting back to Skype-operated bootstrap servers.
I believe there are ways to hide portions of the edit history as well - they usually don't bother to do this if it's only part of a larger article, but if necessary, it can be done.