Kobra's Antivirus SHOWDOWN results.
kobra's 6-14-04 AV Test.
Testbed consisted of 321 viruses, Trojans and worms, all for the Win32 environment, and all reasonably new samples. I don't have any data on whether some of these are zoo or ITW, but they are all real threats I feel someone is likely to encounter, since I got them off the internet (and I've verified they are real: each sample must be detected by at least 4 AVs for me to consider it; there's a small sketch of that rule below).

All scanners were installed on a clean system, without any traces of other anti-virus software; between each test the system and directories were cleaned, and the registry was swept. Each AV product got a double reboot, one before and one after installation. Each scanner was set to its highest possible settings and was triple-checked for proper options and configuration. Most products were the full registered version where possible; the others were fully functional, unrestricted trials. All products were tested with the current version as of 6-14-04 and the latest definitions for that date. Each product was run through the test set a minimum of 3 times to establish proper settings and reliability. The only product to show any variance on this was F-Secure, which had one scan come up with fewer detections than the other two without any settings changes, indicating a possible stability issue.
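To illustrate that qualification rule, here is a minimal Python sketch (my own illustration; the sample names and scanner hits are made-up example data, not the actual test tooling): a candidate only makes the testbed when at least 4 scanners flag it.

# Minimal sketch of the sample-qualification rule described above.
# A candidate sample is only admitted to the testbed if at least
# 4 scanners detect it. The data below is made-up example input.
MIN_DETECTIONS = 4

candidates = {
    "sample001.exe": ["KAV", "RAV", "McAfee", "F-Prot", "DrWeb"],  # 5 hits -> admitted
    "sample002.exe": ["KAV", "NOD32"],                             # 2 hits -> rejected
}

testbed = sorted(name for name, hits in candidates.items()
                 if len(hits) >= MIN_DETECTIONS)
print(testbed)  # ['sample001.exe']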
The final standings:
1) eXtendia AVK
2) McAfee VirusScan 8.0
3) F-Secure
4) Kaspersky 5.0
5) GData AVK
6) RAV + Norton (2-way tie)
7) Dr.Web
8) CommandAV + F-Prot + BitDefender (3-way tie)
9) ETrust
10) Trend
11) Panda
12) Avast! Pro
13) KingSoft
14) NOD32
15) AVG Pro
16) AntiVIR
17) ClamWIN
18) UNA
19) Norman
20) Solo
21) Proland
22) Sophos
23) Hauri
24) CAT Quickheal
25) Ikarus
Heuristics seemed to play a role in this test, as no AV had every virus in my test set in their definitions, and products with stronger heuristics were able to hold their position towards the top of the test. Double/multi-engined products put up strong showings as well, proving to me that the redundancy method works, and I think more AV companies should consider double engines. The strongest heuristic AV I noticed was F-Prot/Command: it picked up only 247 samples with definitions, but it was able to power through 67 additional hits on "Possible Virus" indicators - very strong! Norton with Bloodhound activated had 30 heuristic pickups, and Dr.Web rounded out the pack with 20. eXtendia AVK grabs the number one slot with double-engine scanning: anything the KAV engine missed, the RAV engine picked up, with great redundancy on the double engine/definition system. McAfee actually missed only 2 samples with its definitions, but picked those 2 up as "Suspicious File", and therefore scores nearly perfect as well.
The biggest disappointments for me were Norman and NOD32. Even with Advanced Heuristics enabled, NOD32 failed to pick up a large portion of the samples. Norman, while finding some of the toughest samples, managed to completely miss a large portion of them, showing that its sandbox-emulation system has great potential, but it's far from complete.
Actual test numbers were:
Samples found / total samples (321 possible), number missed, and detection percentage (a quick arithmetic sketch follows the list):
1) eXtendia AVK - 321/321 0 Missed - 100%
2) McAfee VirusScan 8.0 - 319/321 by definitions + 2 found heuristically (flagged as joke programs) - 100%
3) F-Secure - 319/321 2 Missed - 99.37%
4) Kaspersky 5.0 - 318/321 3 Missed - 99.06%
5) GData AVK - 317/321 4 Missed - 98.75%
6) RAV + Norton (2-way tie) - 315/321 6 Missed - 98.13%
7) Dr.Web - 310/321 11 Missed - 96.57%
8) CommandAV + F-Prot + BitDefender (3-way tie) - 309/321 12 Missed - 96.26%
9) ETrust - 301/321 20 Missed - 93.76%
10) Trend - 300/321 21 Missed - 93.45%
11) Panda - 298/321 23 Missed - 92.83%
12) Avast! Pro - 292/321 29 Missed - 90.96%
13) KingSoft - 288/321 33 Missed - 89.71%
14) NOD32 - 285/321 36 Missed (results identical with or without advanced heuristics) - 88.78%
15) AVG Pro - 275/321 46 Missed - 85.66%
16) AntiVIR - 268/321 53 Missed - 83.48%
17) ClamWIN - 247/321 74 Missed - 76.94%
18) UNA - 222/321 99 Missed - 69.15%
19) Norman - 215/321 106 Missed - 66.97%
20) Solo - 182/321 139 Missed - 56.69%
21) Proland - 73/321 248 Missed - 22.74%
22) Sophos - 50/321 271 Missed - 15.57%
23) Hauri - 49/321 272 Missed - 15.26%
24) CAT Quickheal - 21/321 300 Missed - 6.54%
25) Ikarus - Crashed on first virus. - 0%
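For anyone checking my math, every line above is simply missed = 321 - found and percentage = found / 321, with the percentages truncated to two decimals rather than rounded. A quick Python sketch, using a few rows from the list as example input:

# Re-derives the "Missed" and percentage columns from the found/total counts.
# The percentages in the list above are truncated (not rounded) to 2 decimals.
TOTAL = 321

found_counts = {
    "eXtendia AVK": 321,
    "F-Secure": 319,
    "NOD32": 285,
    "CAT Quickheal": 21,
}

for product, found in found_counts.items():
    missed = TOTAL - found
    pct = int(found / TOTAL * 10000) / 100  # truncate to 2 decimal places
    print(f"{product}: {found}/{TOTAL}, {missed} missed, {pct}%")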
Also interesting to note: the detection level of the US AVK version with the KAV+RAV engines was higher than the German version with the KAV+BitDefender engines. Several vendors have free versions of their for-purchase AVs; we didn't test the free versions, as it would serve no purpose for this test, but based on these results, none of the free versions would have been very impressive anyway. The term "heuristics" should be taken very liberally, as some products that claim to be loaded with heuristics scored miserably on items they clearly didn't have definitions for. Scanning speed was not measured, as it was totally irrelevant to my testing, and on-access scanners were not tested, as it would have been too time consuming; but considering most products use on-access engines similar to their on-demand ones, and the same database, the results would most likely be very similar.
Cut through the hype, cut through the marketing schemes: this was a real test, with real samples, and none of these samples were provided to the antivirus software vendors in advance. This is real world, and these are likely the bad guys you'll encounter, since I got them in my real encounters; all were acquired on the internet in daily activities anyone out there might be involved in (installing shareware, filesharing, surfing, etc.). Keep in mind that with ITW tests the AV vendors have full disclosure of what they will be tested on in advance; not so here, so heuristics and real detection algorithms play a big part, as do the depth and scope of each definition database.
Honestly, I was *HOPING* to be surprised by a ton of things in this test, and really all I did was reinforce the results of many of the other testing sites. Mine are very close to theirs, which actually shocked me, because I'm sure my samples aren't the same. This tells me that, overall, this might be a great gauge of these products.
Also, I wanted to test the multi-engined products against the others, since most testers seem to avoid testing them. Strong showings by F-Secure and the AVK brothers proved this idea works, and works incredibly well. The strength of the KAV engine cannot be denied either, since all but one of the top 5 products use it. I forgot to add: one product I tested, called V-Catch, turned out to be a trojan downloader and spyware application masquerading as an AV product.. LOL! Thankfully it was the last product I tested, and I just reformatted; I think it downloaded 30 trojans to my system. 8-)
I did NOT test any DOS viruses, as testing these in a Windows-based environment is pointless; it tells us nothing. I cannot understand why Clementi bothers to test them, as all they do is skew his results badly. For example, in his test NOD32 scored 95.51%, but without the DOS and other-OS samples NOD32 scored only 87.71%, which, amazingly enough, is within about 1% of *MY* result. So I'm baffled as to why he skews his own results for no real purpose. Who the hell cares what a product scores on DOS?!?