Fake Firefox warnings lead to scareware

Nuclear Firefox logo

Purveyors of fake security software don't let much grass grow under their feet and continually improve their social engineering lures.

While most of the talk over the past month has been about their move to the Mac, with fake Finder pop-ups that appear to scan your computer, they haven't stopped innovating on Windows either.

Their latest scam? They detect your user-agent string from your web browser and display a fake Firefox security alert if you are using the Mozilla Firefox web browser.

Fake Firefox security alert

Internet Explorer users get the standard "My Computer" dialog that appears to do a system scan inside their browser window.

Taking advantage of detailed information about the person's computer and software allows for a much more specific, believable social engineering attempt.

We are likely to continue to see these criminals targeting each operating system, browser and any other details that can be gleaned from HTTP requests sent from our devices.
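To make the mechanism concrete, here is a minimal sketch (in Python, purely illustrative and unrelated to the actual scam code) of how a script can classify a visitor's browser from nothing more than the User-Agent header sent with an ordinary HTTP request:

# Minimal sketch of user-agent detection: everything a server needs to pick
# a browser-specific lure is already present in the HTTP request headers.
# The example string is a real-world-style Firefox 4 user agent, used here
# only to illustrate the matching logic.

def detect_browser(user_agent: str) -> str:
    """Return a rough browser family name based on the User-Agent header."""
    ua = user_agent.lower()
    if "firefox" in ua:
        return "Firefox"
    if "msie" in ua or "trident" in ua:
        return "Internet Explorer"
    if "safari" in ua and "chrome" not in ua:
        return "Safari"
    return "Unknown"

print(detect_browser(
    "Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1"))
# -> "Firefox"

A page that branches on this result can then show a Firefox-styled alert to Firefox users and the fake "My Computer" scan to Internet Explorer users, which is exactly what makes the lure so convincing.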

If you click the "Start Protection" button you will download the latest, greatest fake anti-virus program which will perform exactly the way you would expect a fake anti-virus program to.

It will faithfully detect fake viruses on your computer until you register it for $80 or more.

If you are a Firefox user and see a warning about viruses on your computer, you will know it is fake. Firefox does not include a virus scanner; it only warns you about visiting malicious web pages.

If you get a warning about a dangerous website from Firefox you can always play it safe... Close the browser.

Nuclear Firefox image credit: iPholio on DeviantArt


What Google and Mastercard’s new mobile payment system could mean for iOS users

Last week saw a major new product announcement from Google: the new "Google Wallet" will allow people with compatible mobile phones to use them to pay for goods and services in shops with a simple wave of their hand. This follows a number of in/out/in/out/shake it all about rumors that this "NFC" stuff might be included in the next iPhone. So what is NFC and why should you care? Sit back, grab a cup of coffee, and I'll explain.


What Google and Mastercard's new mobile payment system could mean for iOS users originally appeared on TUAW on Mon, 30 May 2011.


vSphere Home Lab – Intel Desktop Board DQ67SW supports 32 GB

When you're building a new home lab or adding a new white box to your private cloud, you want to put as much memory into your servers as you can. It's the same story as in most datacenters: CPUs sit idle while your ESX servers run out of memory. Intel has released a new chipset that might be worth a closer look. The Intel Desktop Board DQ67SW has four 240-pin DDR3 SDRAM Dual Inline Memory Module (DIMM) sockets, each of which can hold an 8 GB module. That means 32 GB of system memory, based on DDR3 1333 or 1066 MHz DIMMs, on a single board.
 
Based on the latest Intel® Q67 Express Chipset with Intel® vPro™ technology, Intel® Desktop Board DQ67SW is designed to showcase the superior performance quality of the 2nd generation Intel® Core™ vPro™ processor family, enhance office productivity, and lower the total cost of ownership for your business PCs with the newest Intel® Active Management Technology (Intel® AMT) 7.0.


Designed for exceptional stability and compatibility, the Intel Desktop Board DQ67SW is equipped with the latest SuperSpeed USB 3.0 and SATA 6 Gb/s ports with RAID support, and offers dual independent display capability via DisplayPort and dual DVI ports.

Implementing Fibre Channel SANs in a virtual server environment

SearchVirtualStorage.com


Fibre Channel (FC) SANs are a popular choice for virtual server environments. They offer good performance and security, and since many people already have Fibre Channel SANs implemented in their environment, they often stick with the same technology for the virtual environments. However, Fibre Channel SANs are not right for everyone’s virtual server platforms. They are expensive, and also require experienced administrators to implement them.

In this podcast interview, Eric Siebert, a VMware expert and author of two books on virtualization, discusses FC SANs for virtual server environments. Learn about the advantages and disadvantages of implementing FC SANs to support your virtual server platform, what steps to take to set up a Fibre Channel SAN correctly and what requirements you should know before you choose a Fibre Channel SAN. Read the transcript below or download the MP3 recording.

Listen to the Fibre Channel SANs Q&A.

Fibre Channel is a popular choice when it comes to virtual server environments. What are the benefits of using Fibre Channel SANs to support a virtual server platform?

Typically, Fibre Channel is one of the best-performing and most secure storage technologies available today, so people go for it when they need that performance and the most I/O they can possibly get. Fibre Channel has traditionally been what they would implement from a performance and security standpoint. Fibre Channel networks are, ideally, isolated and kept in their own separate environments, so they are more secure than other types of storage networks such as iSCSI.

Fibre Channel is also commonly deployed in the enterprise storage architecture, so a lot of people have Fibre Channel SANs already that they can leverage. So in a lot of cases, rather than having to implement something from scratch, they already have a Fibre Channel SAN available that they could maybe expand on.

Fibre Channel is also block-level storage, as opposed to file-level storage such as NFS. And if you want to do other things like boot from SAN, Fibre Channel SANs already have those features; they're starting to become available in some of the other technologies as well, such as iSCSI, where you can boot from SAN and have a diskless server. So you don't have to put disks in the server; it can just boot directly from the SAN and run your VM environment.

What disadvantages or complications can you run into with FC SANs for a virtual server environment?

There are two big ones: cost and complexity. Fibre Channel SANs are typically the most expensive option available. If you're looking to implement one from scratch, you have to buy components made specifically for Fibre Channel: Fibre Channel cables, Fibre Channel adapters that you put in your servers, switches and back-end SAN storage, so the cost is pretty high. The reason it is so high is the performance; the cost is relative to that, and you're going to pay a lot more money to get that performance, so cost is one of the major disadvantages.

In terms of complexity, Fibre Channel SANs typically require specialized skills that your average server administrator won't have. You need an administrator who really understands the technology and can implement and administer it properly.

How do you set up an FC storage device to support virtual servers? What steps do you need to take to ensure everything runs smoothly?

It's pretty straightforward. You just have to be aware of things like speeds. There are different speeds for Fibre Channel, but typically today the most common one is 4 Gb/s; the newer parts are 8 Gb/s, which provide double the speed. You don't want to mix components, because the path will only run at the speed of its slowest component. If you have a 1 Gb/s or 2 Gb/s adapter in your server while your switches and back-end storage are 4 Gb/s or 8 Gb/s, everything operates at that lowest common speed, so make sure all your components run at the same speed to get the most value and performance out of your environment.
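As a quick worked example of that lowest-common-speed point (a hypothetical sketch with made-up hardware speeds):

# The effective speed of a Fibre Channel path is simply the speed of its
# slowest component (all values in Gb/s; the numbers below are examples).
def effective_path_speed(*component_speeds_gbps: float) -> float:
    return min(component_speeds_gbps)

# Example: 2 Gb/s HBA, 4 Gb/s switches, 8 Gb/s array -> the path runs at 2 Gb/s.
print(effective_path_speed(2, 4, 8))  # prints 2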

Also consider things like multipathing. You want redundancy, so typically you implement multiple paths to the SAN all the way from the server: two Fibre Channel adapters inside the server, redundant switches, and typically two controllers on the Fibre Channel SAN. That way, if any one component breaks, there is always an open path from the server to the SAN. If you implement multipathing with an active-active configuration, where both paths are active, you can also get more performance out of it. That's always a good idea to keep in mind when you're implementing a Fibre Channel SAN.
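To illustrate the active-active idea in the abstract, the sketch below spreads I/O round-robin across whichever paths are still healthy. It is a conceptual model only; real multipathing is handled by the hypervisor's storage stack (path selection plugins), not by code like this, and the path names are invented.

import itertools

# Conceptual sketch of active-active multipathing: I/O is spread round-robin
# across every path that is still healthy, and a failed path is simply skipped.
class MultipathDevice:
    def __init__(self, paths):
        self.health = {p: True for p in paths}   # path name -> up/down
        self._cycle = itertools.cycle(paths)     # round-robin order

    def mark_failed(self, path):
        self.health[path] = False

    def next_path(self):
        # Try each path at most once per call; fail only if everything is down.
        for _ in range(len(self.health)):
            p = next(self._cycle)
            if self.health[p]:
                return p
        raise RuntimeError("no healthy path to the SAN")

# Two HBAs x two SAN controllers = four possible paths (names are made up).
dev = MultipathDevice(["hba0->ctlA", "hba0->ctlB", "hba1->ctlA", "hba1->ctlB"])
dev.mark_failed("hba0->ctlA")               # simulate a failed component
print([dev.next_path() for _ in range(4)])  # I/O keeps flowing on the rest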

Overall, setting up a Fibre Channel SAN can be complicated, so make sure everything is set up and working properly; proper preparation is key to proper configuration. And work closely with your SAN and storage administrators; make sure they understand your needs.

Are there any requirements you need to know about before choosing an FC SAN? Does this storage option require more experienced administrators?

Yes, requirements are the key here. You need to know your requirements. If you have applications that need a specific amount of I/O, you need to know what that is to be able to size your SAN properly. You shouldn't go in blind and assume that just because a Fibre Channel SAN is fast it will work for you. You really need to do the homework: do an assessment of your environment, figure out your I/O requirements and any redundancy requirements, and then figure out capacity as well. You need to know how much storage to buy so the Fibre Channel SAN has the capacity for all of your virtual machines.
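As a rough illustration of that assessment (the VM names and workload numbers below are hypothetical), the exercise boils down to adding up per-VM IOPS and capacity and applying whatever headroom you plan for:

# Back-of-the-envelope SAN sizing from a per-VM assessment.
# The workload figures are hypothetical; plug in numbers gathered from your
# own environment, e.g. from performance monitoring during a busy period.
vms = [
    {"name": "sql01",  "peak_iops": 1200, "capacity_gb": 500},
    {"name": "exch01", "peak_iops": 800,  "capacity_gb": 750},
    {"name": "web01",  "peak_iops": 150,  "capacity_gb": 80},
]

HEADROOM = 1.3  # 30% growth/burst margin; pick what fits your own plans

required_iops = sum(vm["peak_iops"] for vm in vms) * HEADROOM
required_capacity_gb = sum(vm["capacity_gb"] for vm in vms) * HEADROOM

print(f"Size the SAN for roughly {required_iops:.0f} IOPS "
      f"and {required_capacity_gb:.0f} GB of usable capacity")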

Other requirements to consider are the features you need. There might be certain features you want on that SAN, such as snapshot capabilities and replication. Also, if you have a disaster recovery strategy, make sure you include your needs for that, because a lot of the time you can leverage storage features to replicate data from your main site to an alternate DR site. So you really need to assess your requirements and what kind of features and functionality you're going to need before you go out and buy one.

There are also some new VMware features, like the vStorage APIs, that a lot of SANs are supporting now. The vStorage APIs for Array Integration offload some of the storage operations that the hypervisor would traditionally perform down to the storage layer, which makes them a lot quicker. You take load off your virtualization hosts and put it on the storage infrastructure, which improves the performance of your virtual hosts and runs the task on the storage device where it belongs, making the whole setup a lot more efficient.

So basically you want to look around, evaluate, ask questions, get references, gather your requirements and go from there to make a decision; hopefully you can find the right Fibre Channel solution that meets your needs.

22 Apr 2011

 

via Implementing Fibre Channel SANs in a virtual server environment.

IO Turbine addresses IO performance for VMware virtualized servers

Sonia R. Lelii, Senior News Writer

Published: 24 May 2011

Startup IO Turbine Inc. today unveiled its Accelio software designed to help mitigate IO performance latency problems in VMware environments by offloading IOPS from primary storage to flash.

Accelio, still in beta, works with virtual machines that use locally attached solid-state storage or flash. Accelio installs on VMware servers, identifies the highest-priority data and offloads IOPS from primary storage to flash. IO Turbine claims Accelio will increase application performance and throughput at each virtual machine and remove IO bottlenecks in VMware environments without requiring more spindles or SSDs.

“We are trying to get the flash as close to the IO request as we can,” said Bruce Clarke, IO Turbine’s vice president of technical marketing and support. “We are trying to get the IO to not ever leave the physical host. If we can do that, you get much greater performance. So technically we are moving the flash into the guest.”

Clarke said enterprise storage systems that use automated tiering with data migration to and from solid-state storage add latency to the storage array. Instead, he said, Accelio automatically directs IO requests away from primary storage to flash for the most frequently accessed data.

Clarke said IO Turbine is trying to solve problems created by a proliferation of virtual machines. With physical servers, LUNs are designated directly to one server so IO requests are consistent in workload requirements. But now servers can be carved up into multiple virtual machines running multiple applications with varying IO workload requests that are being sent to a shared storage infrastructure.

“It’s a much more jumbled workload than primary storage has ever seen before,” Clarke said.

He said Accelio reduces latency because it sits in the guest OS making the requests and redirects them to flash that is also in the host. Only the first request needs to be sent to the external storage system. An Accelio VLUN driver also resides in the VMkernel of the VMware ESX host to help the flash accelerate IO requests, said Clarke.
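The general pattern being described is a host-side read-through cache: the first read of a block goes out to primary storage, and repeat reads of hot blocks are served from local flash. The sketch below illustrates that pattern in the abstract; it is not IO Turbine's implementation, and the class name and simple LRU policy are assumptions for illustration.

from collections import OrderedDict

# Abstract read-through cache: hot blocks are served from local flash,
# cold blocks fall through to primary storage and are then cached.
# Illustrative only; not a description of Accelio's internals.
class FlashReadCache:
    def __init__(self, backend_read, capacity_blocks=1024):
        self.backend_read = backend_read   # function: block_id -> data
        self.capacity = capacity_blocks
        self.cache = OrderedDict()         # block_id -> data, in LRU order

    def read(self, block_id):
        if block_id in self.cache:             # cache hit: stays on the host
            self.cache.move_to_end(block_id)
            return self.cache[block_id]
        data = self.backend_read(block_id)     # miss: one trip to the array
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:    # evict the least recently used
            self.cache.popitem(last=False)
        return data

# Usage: wrap whatever actually reads from primary storage.
def read_from_primary_storage(block_id):
    return f"data-for-block-{block_id}"

cache = FlashReadCache(read_from_primary_storage, capacity_blocks=2)
cache.read(1); cache.read(2); cache.read(1)  # second read of block 1 is a hit
cache.read(3)                                # evicts block 2 (least recent)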

“We place cache between the DRAM and disk,” IO Turbine CEO Rich Boberg said. “We use flash or a faster local storage before you go to memory; that way the flash is very close to the application.”

Currently, Accelio supports only VMware, but the company said there are plans to support Microsoft Hyper-V and Citrix XenServer in the future. The product also supports only Windows 2008 environments but any flash format. It also works with physical servers.

Managed IT service provider BC Networks of San Jose, Calif., is beta-testing Accelio. CEO Dave Brewer said Accelio has improved application performance at least 30% by caching IO.

BC Networks tested Accelio by installing several applications it uses in production inside a virtual machine running Windows 2008 R2. Brewer's team created two clones of the virtual machine for a total of three instances running on VMware ESX. The first instance ran on a Hewlett-Packard ProLiant DL380 with eight 300 GB drives. The second instance ran on a Lenovo server with a single 2 TB SATA drive as the Accelio cache, and the third instance ran on an identical Lenovo server with two SSD drives in a RAID 0 configuration.

“What we found was that the second configuration loaded applications 30 percent faster than the first configuration,” Brewer said. “We also found the second configuration loaded applications at about the same speed as the third configuration. When we disabled Accelio on the second configuration, we found that this virtual machine performed poorly since it was handicapped with only one slow hard disk drive, almost 30 percent slower than the first configuration.”

Brewer said BC Networks has not yet tested Accelio on shared storage.

“IO Turbine is providing IO cache for data to improve performance of data on the server side instead of requesting it from the SAN,” said Chris Wolf, Gartner’s research vice president for IT professional services. “They are putting software close to the hypervisor and that gives exceptional storage IO.”

IO Turbine secured a $7.75 million Series A funding round in April led by Lightspeed Venture Partners and Merus Capital, with angel investors including Sun founder Andy Bechtolsheim. The founders are NetApp veteran Boberg and Chief Technology Officer Vikram Joshi, formerly of Sun and Oracle.

via IO Turbine addresses IO performance for VMware virtualized servers.