First of all, I am a big fan of this approach and think it will be a winner (e.g. Lorex, March Networks, others).
As for hosting, there are tons of options available. Key questions: What OS does the server run on? Do you need a dedicated server, or can it run on a virtual machine? Do you want to be able to physically access it at any point?
So what sort of server DOES it require? If it just needs a solid Linux host with things like PHP, MySQL, Perl, etc., I'd recommend DreamHost - I'm hosting a few sites (mostly for friends) under my single account, and pricing and support are excellent. If it requires your own physical system, they also offer a dedicated private server option.
I'd start by getting a better idea from the developer on minimal requirements. Some things I'd look for:
- O/S recommended
- Language the service is developed in, along with required modules and/or frameworks
- HTTP server and required modules or components
- Database server used
- Key hardware resources (e.g. is it processor-intensive, will it cache everything in RAM, will it need lots of storage space)
- Bandwidth used and connection model (e.g. is it a simple forwarder that basically brokers connections, or does the server stay in the data path, which will use a lot more bandwidth)
With the above, you can probably get a better idea of whether you can use a shared server, an elastic platform (AWS, et al.), or a dedicated server. Once you narrow it down to a specific category of hosting, it's easier to find the best option based on the details.
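That last bullet is worth quantifying. Here is a rough back-of-envelope sketch (all figures - session counts, bitrates, hours - are made-up assumptions for illustration, not vendor numbers) showing why the connection model dominates bandwidth:

```python
# Rough, illustrative comparison of monthly server bandwidth for the two
# connection models described above. All numbers are assumptions.

GB = 1024 ** 3

def forwarder_monthly_gb(sessions_per_day, handshake_kb=50):
    """Server only brokers connections; per-session traffic is tiny."""
    return sessions_per_day * 30 * handshake_kb * 1024 / GB

def relay_monthly_gb(concurrent_streams, mbps_per_stream, hours_per_day):
    """Server stays in the data path and relays every video byte."""
    bytes_per_sec = concurrent_streams * mbps_per_stream * 1_000_000 / 8
    return bytes_per_sec * 3600 * hours_per_day * 30 / GB

print(f"forwarder: {forwarder_monthly_gb(1000):.2f} GB/month")
print(f"relay:     {relay_monthly_gb(20, 2.0, 8):,.0f} GB/month")
```

With these assumed numbers, a pure connection broker uses on the order of 1-2GB/month, while an in-path relay uses roughly 4TB/month - about three orders of magnitude more.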
Here is the info they gave me:
Cloud Server Setup Details
CPU: 4 cores or more
Memory: 8GB or more
HDD: 100GB or more
Network traffic: 10TB/month or more
Operating system: Linux CentOS 5.4
Note: Only our company manages the cloud server, including the root password and maintenance; the customer only uses the cloud and does not need to operate the server.
Here are the steps for cloud setup:
1. Give us the cloud server IP address, root password, and specifications.
2. We test the cloud server's performance.
3. Give us translated language strings (since your language is English, no translation is needed).
4. Give us the web design.
5. We install the cloud software and finish within 2 weeks.
Thanks for sharing the specs.
Do they really need 8GB of RAM? That's going to increase the price significantly. From what I have seen, that will be a few hundred dollars a month. As a point of comparison, the IPVM web server uses 2GB and costs ~$100 per month.
I'd go with a virtual server. Any of the big providers can dynamically segment the size you are looking for. (We use Rackspace but there are many alternatives).
I would not recommend your own dedicated physical box. It's just not needed for what you are doing, and it will make things more complicated if anything goes wrong with your box or you need to expand. Virtual servers, by contrast, can be managed with a click of a button in your web admin interface.
The main consideration will probably be physical proximity of the server to your customers. This should not be a big deal, and the server does not need to be physically right next to your customers, but if most of them are in the Midwest, it would be better to host in Texas than, say, the UK.
Btw, I second Brian's question/point about checking how they do their connection as that will have a big impact on performance.
If I had to guess, the only time major traffic is sent to the server from the DVR or NVR is when the client specifically requests a particular camera to view. I assume a small amount of outgoing traffic is sent from the DVR to the server to stay "connected" at all times, but I assume it's small. If not, that would kill the upload bandwidth at the DVR location. I'll have to make sure, though. If it is constant full-blown traffic, then I don't want to do it.
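The keepalive worry is easy to sanity-check with arithmetic (the packet size and interval below are guesses for illustration, not measured values):

```python
# Estimate the monthly upload used by keepalive traffic from one DVR.
# Packet size and interval are illustrative guesses.

def keepalive_mb_per_month(packet_bytes=200, interval_sec=30):
    packets = 30 * 24 * 3600 / interval_sec      # packets per month
    return packets * packet_bytes / (1024 ** 2)  # MB per month

print(f"{keepalive_mb_per_month():.1f} MB/month")
```

That works out to roughly 16MB/month per DVR - negligible. The only real bandwidth concern is whether the video itself is relayed through the server.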
I was under the assumption that a physical box would be needed for something like this, but if I could use a virtual server, that would be great.
Given those specs, if they are correct, I would question the efficiency of what is being created there.
It sounds like all connections are going to be permanently routed, in essence, through the server? If so, you should *really* have multiple geo-located servers and do geo-based DNS...
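If geo-located servers were added, the simplest assignment scheme is nearest-region by distance; here is a minimal sketch (the region names and coordinates are hypothetical, not from this thread):

```python
import math

# Hypothetical relay regions with (lat, lon) coordinates - assumptions only.
REGIONS = {
    "us-central": (32.8, -96.8),   # Dallas
    "eu-west": (51.5, -0.1),       # London
}

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points, in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + \
        math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371 * math.asin(math.sqrt(h))

def nearest_region(client_latlon):
    """Pick the region closest to the client's location."""
    return min(REGIONS, key=lambda r: haversine_km(client_latlon, REGIONS[r]))

print(nearest_region((41.9, -87.6)))  # a Chicago-area client
```

In practice you'd let the DNS provider do this (e.g. latency-based routing) rather than computing it yourself, but the principle is the same.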
Personally, I would NOT recommend virtual servers for an application like this (again, assuming it is resource-intensive). You'll be forever fighting phantom performance issues, never sure whether they're caused by your stack or by some other site hosted on the same hardware.
I have a couple of fairly beefy dedicated servers that run right around $100/mo., with no real limits. Yes, I'm responsible for their care and feeding, but I also have 100% of them dedicated to me.
I suspect the reason they specify such a beefy machine is just so there AREN'T later performance issues, in case your service really grows. It may be overkill for a dozen or so customers, but once you get a few dozen sites using it, it could ramp up fast. Starting with plenty of horsepower up front avoids the need to upgrade bits and pieces anytime soon.
Might not be the ideal business model, but it makes sense for the manufacturer to recommend that - it lessens the chance of them getting calls from their vendors complaining that their system doesn't work very well.
Also, based on their "requirements", I don't know if they would willingly support virtual servers. Because no web server or backend components are specified, just a Linux platform, I expect they want to remote in and install their own web server and backend... PROBABLY something very common like Apache, PHP, and MySQL. But at the same time, if they control all of it, that ensures no version mismatches and avoids a lot of potential issues from things outside their control (trust me, I've seen some very bizarre issues caused by only SLIGHTLY incorrect PHP builds, or by code that requires specific PHP options to be enabled and precisely configured).
Again, starting from the cleanest slate possible means fewer headaches for the manufacturer down the line - they set it all up, make sure it's working to spec, and theoretically, the customer will never need to call them for support.
With virtual machines / 'slices' / instances, you get full root access and complete control over all software on the 'box'. It's 'just a Linux platform' and you can install whatever you want to your heart's content.
Seriously? Is this 2003? :)
I can't understand why it would make sense to have your own physical machine unless you had a very big infrastructure, your own internal people working full time on this, etc.
And in terms of 'avoiding the need to upgrade bits and pieces anytime soon,' that's exactly what virtual servers are for. Within 5 or 10 minutes, the machine can be upgraded or downgraded from 1GB to 30GB of RAM, etc. (example).
"I can't understand why it would make sense to have your own physical machine unless you had a very big infrastructure, your own internal people working full time on this, etc."
It's FAR more common than you think for anything where reliable performance under load is required.
Years ago, yes. Today, no.
I'd be more concerned about the reliability of one's own physical box vs. a virtual solution. If there's a problem with the machine your virtual server is running on (whatever the cause), the provider can simply move you to a new box, transparently to you. If it's your own physical machine, you need to fix it. Does Sean (or his team) have the expertise to maintain this? Do they have the interest? The time? The money? Etc.
Lots of even big web apps run virtual. Why has Heroku gotten so big? Even with their crazy dyno issues, those apps are overwhelmingly sticking with that platform or a similar one.
"Does Sean (or his team) have the expertise to maintain this? Do they have interest? The time? The money? etc."
That's why you let the host deal with all that... for example.
Brian's right - for any virtual server, there has to be some hardware somewhere behind it, and you're sharing that with anyone else who's buying space and cycles on that server.
Matt, let's not conflate 'shared' hosting with 'virtual private servers' (here's a comparison of the two). From a physical perspective, the key difference with virtual is that the 'server' is not tied to a single machine and can be moved around to re-allocate resources, etc. When you pay for 1GB, 2GB, or 4GB from a reputable service provider, you're pretty much guaranteed to get it (regardless of what physical machine it is running on).
As for VPS vs dedicated, here's a good comparison of those two. For the same given resources, you will pay more for dedicated and have less flexibility to upgrade or make physical changes.
The risk of 'sharing' is not a real risk with virtual private servers. The risk of overpaying or, worse, under-spec'ing the machine is far more significant for Sean, given that he's just getting started and the vendor's hardware specifications are questionably high for what should be little more than a traffic cop.
Btw, compare to the technical requirements for Axis AVHS LAMP server - single core Xeon, 4GB RAM. That's way less than what Sean's provider is asking for.
"Years ago, yes. Today, no. "
No, today, every day. This is something I'm still involved with on a daily basis.
There is no more to maintaining a dedicated server than a VPS, in both cases you're still remote, and the data center is still the "hot hands" for any kicks-in-the-ass needed to the hardware itself.
VPS is great (and popular) for all the low budget, simple things. But it's not considered the best choice when your application on the server is resource-bound in some way and you need to be able to guarantee performance, all the time, every time.
For Sean's thing, he *should* be able to get by with a VPS, if the application is coded right. My worry is that he's having this built by someone who doesn't *really* know how to code a scalable application and it's going to be a nightmare of issues.
John, what's the big deal with 8GB of RAM in a server? My home computer has 12GB and the cost was peanuts. Newegg has Kingston 4GB 240-Pin DDR2 FB-DIMM ECC Fully Buffered DDR2 667 (PC2 5300) Server Memory for $135 and others at substantially less.
My question on cost would regard the 10TB/month Net Speed. Using Cox Business as a guide, you'd probably pay $500-$600 a month for suitable service. That would give you 25Mbps download and 4Mbps upload to 50Mbps download and 5Mbps upload. Although I can't find a list of bandwidth caps for Cox Business, their equivalent home services have 300GB to 400GB caps.
I would guess that the cost to up that to 10TB would be substantial.
The cost of the internet is the main reason why we want to host it somewhere else.
Carl, 8GB of RAM is a lot for a Linux server. It's not like a Windows PC where you have ten client applications running simultaneously. It's hard to imagine they really need 8GB of RAM on a Linux server unless they are serving huge numbers of simultaneous connections.
As for 10TB per month, how many connections / users / visitors does that cover? What's the estimated traffic load per average connection?
10TB is a _ton_ of bandwidth. Either they are expecting an incredible number of connections or they are routing video through this box. If it's the latter, it's a risky design.
Before deploying anything, understanding how this system works and what it needs per visitor/connection/user is critical.
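One way to read the 10TB figure: how many around-the-clock relayed video streams it could carry (the per-stream bitrate below is an assumption, not a vendor number):

```python
def max_continuous_streams(tb_per_month=10, mbps_per_stream=1.0):
    """How many streams, relayed 24/7, fit in a monthly transfer cap.
    Uses decimal TB (10^12 bytes); bitrate is an assumed average."""
    bytes_per_month = tb_per_month * 10 ** 12
    bytes_per_stream = mbps_per_stream * 1_000_000 / 8 * 30 * 24 * 3600
    return bytes_per_month / bytes_per_stream

print(f"{max_continuous_streams():.0f} streams")
```

That comes to about 30 continuous 1Mbps streams - consistent with the suspicion that video is routed through the box, since a pure connection broker would need only a tiny fraction of 10TB.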
IPVMU Certified | 05/22/13 04:11pm
Hello Sean, my recommendation would be to go the Amazon Web Services route. You can size your server instance based on its load, and as demand grows, you can reboot or swap it for an instance that has more resources. You can run instances in different global regions to provide lower latency geographically. Alternatively, if a region goes down, you can use their DNS service to redirect to a different region. They have a free tier that is great for testing and development.
The performance is there as well, you can get a server as small as 1 CPU w/0.6GB of memory and up to 88CPUs w/244GB of memory. You can set storage IOP performance as high as 10,000 and even stripe multiple of those for faster performance.
I'm currently migrating my company's VMware vSphere environment from a colo to AWS. The yearly cost of running the same environment with the same performance in AWS is only 27% of what we're currently paying for the colo. However, we're going to add a passive site in another region that we can fail over to in the event of an outage, which brings our yearly costs to 36% of the yearly colo costs. This applies to our application; others may vary greatly.
Thanks. I have looked into Amazon as well. Seems great. It's amazing; Amazon and Google will soon be taking over the world. They have everything!
I will let you guys know if/when I get this thing up and running.
The other question with a virtual host is whether your manufacturer will even set up their system on one, or if they'll insist on a dedicated server. IN THEORY, they should never be able to tell the difference, but still... if they INSIST on a dedicated box, it kind of makes the other arguments moot.
Sounds similar to 3xLOGIC's 3xCLOUD service - essentially a web-based client application for Vigil systems; you log in to the site, then set up your DVR/NVR sites the way you would in a thick client. I've played with it a bit but find the performance pretty lacking - by the time the site pulls video from the DVR and re-displays it for you, there are significant delays. Speed seems to vary all over the place, which is probably due to varying usage eating the available bandwidth.
We are moving IPVM to a new (virtual) server today. Here are some technical details.