LPT730 LAB #7
Nowadays there are many different ways to share files over the Internet, and the two most popular are peer-to-peer sharing using .torrent files and magnet links.
Torrents keep growing in popularity: they tend to offer newer files, better control, and better sharing ability. The downside is that a torrent needs a tracker to keep your connection up to date. Once the tracker stops working, you may have trouble finding seeds and leechers. Even if your download is 99% done, if you cannot find the last piece of the file, it may take forever to finish.
The good thing about magnet links is that all you have to do is download a client that supports them. Such applications let you share whole folders to the network while you are downloading files. You have the right not to share your files, of course, but that may limit your download rate. Another advantage of magnet links is that you can use the client's internal search function to look for anything you want.
The following is a comparison between the two methods.
Advantages of .torrent files:
1. You will be able to get a fast connection, especially when the resource was updated recently and the file is popular.
2. It is easy to find up-to-date resources through .torrent files.
3. You can change many options in the client while downloading a torrent package, including priority and download and upload bandwidth.
4. Other users will not be able to see your shared folder, since the client only shares the files you are downloading and the ones you have marked to be shared.
Disadvantages of .torrent files:
1. If a torrent has been stored on the web for a long time, its tracker may already have expired, making it almost impossible to find existing seeds or leechers.
2. Even though you can Google plenty of torrent files on the Internet, torrent clients do not have built-in search functions.
3. Compared to magnet links, you have to find torrent files yourself instead of just clicking a link on a web page or searching inside the client application.
4. If there are no seeds, you may not be able to complete your download until someone on the tracker has the last piece of the file you need.
Advantages of magnet links:
1. They do not need a central authority to issue them.
2. They are more user oriented and easy to use; all the system needs is an application that supports magnet links.
3. Most magnet-link applications have a search function.
4. Note, however, that your firewall or your Internet provider may have disabled the port, or limited the bandwidth, that magnet-link clients use.
Disadvantages of magnet links:
1. Slower speeds.
2. Less control over speed and over the content being downloaded.
3. You may expose sensitive files when you share a whole directory.
4. Files are not always up to date.
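For reference, a magnet link is just a URI that carries the content's hash instead of pointing at a tracker. The link below is an invented example, and extracting its info-hash with standard shell tools is only a sketch:

```shell
#!/bin/sh
# A hypothetical magnet link (the 40-hex-digit value is the BitTorrent info-hash).
link='magnet:?xt=urn:btih:0123456789abcdef0123456789abcdef01234567&dn=example-file.iso'

# Pull out the info-hash: take the xt parameter and strip the urn:btih: prefix.
hash=$(printf '%s' "$link" | sed -n 's/.*xt=urn:btih:\([0-9a-f]*\).*/\1/p')
echo "$hash"    # prints 0123456789abcdef0123456789abcdef01234567
```

Because the hash identifies the content itself, any peer holding the same data can serve it, which is why magnet links do not depend on a single tracker staying alive.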
#!/bin/bash
echo "Starting to back up today's files..."

# Check whether the backup directory exists. If not, create it and set its permissions.
if [ ! -e ~/.backup ]
then
    mkdir ~/.backup
    chmod 755 ~/.backup
fi

# Create a reference file whose modification time is set to today at 00:00,
# so other files' timestamps can be compared against it.
touch -t "$(date +%Y%m%d)0000" backupstamp

# Find the files created or modified today and back them up.
find ~ -newer backupstamp -type f | xargs tar cvf ~/.backup/"$(date +%y%m%d)".tar 2>/dev/null
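The timestamp comparison the script relies on can be tried in isolation. This is a self-contained sketch (all paths here are invented for the demonstration): only the file modified after midnight is selected.

```shell
#!/bin/sh
# Create a scratch area with one old and one new file.
demo=$(mktemp -d)
touch -t 202001010000 "$demo/old.txt"   # pretend this was modified long ago
echo "fresh" > "$demo/new.txt"          # modified just now

# Reference file stamped at midnight today, as in the backup script.
touch -t "$(date +%Y%m%d)0000" "$demo/backupstamp"

# Only files strictly newer than the stamp are selected for the archive.
find "$demo" -newer "$demo/backupstamp" -type f
```

Because `-newer` is a strict comparison, the stamp file itself (whose timestamp equals midnight) is never swept into the archive.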
LPT730 LAB #3
Part 1 Phishing
Phishing is an e-mail fraud method in which the perpetrator sends out legitimate-looking email in an attempt to gather personal and financial information from recipients. Typically, the messages appear to come from well-known and trustworthy Web sites. Web sites that are frequently spoofed by phishers include PayPal, eBay, MSN, Yahoo, BestBuy, and America Online. A phishing expedition, like the fishing expedition it's named for, is a speculative venture: the phisher puts out the lure hoping to fool at least a few of the prey that encounter the bait.
The damage caused by phishing ranges from denial of access to e-mail to substantial financial loss. This style of identity theft is becoming more popular, because of the readiness with which unsuspecting people often divulge personal information to phishers, including credit card numbers, social security numbers, and mothers’ maiden names. There are also fears that identity thieves can add such information to the knowledge they gain simply by accessing public records. Once this information is acquired, the phishers may use a person’s details to create fake accounts in a victim’s name. They can then ruin the victims’ credit, or even deny the victims access to their own accounts.
It is estimated that between May 2004 and May 2005, approximately 1.2 million computer users in the United States suffered losses caused by phishing, totaling approximately US$929 million. United States businesses lose an estimated US$2 billion per year as their clients become victims. In 2007 phishing attacks escalated: 3.6 million adults lost US$3.2 billion in the 12 months ending in August 2007. In the United Kingdom losses from web banking fraud—mostly from phishing—almost doubled to £23.2m in 2005, from £12.2m in 2004, while 1 in 20 computer users claimed to have lost out to phishing in 2005.
But how can phishing attacks be stopped? First, people can avoid many phishing attempts by slightly modifying their browsing habits. When contacted about an account needing to be "verified", it is a sensible precaution to contact the company from which the e-mail apparently originates and check that the message is legitimate. Another popular approach to fighting phishing is to maintain a list of known phishing sites and to check websites against the list; Microsoft's IE7 browser, Mozilla Firefox 2.0, and Opera all contain this type of anti-phishing measure, with Firefox 2 using Google's anti-phishing software. Last but not least, people can use GPG signatures to detect forged e-mail.
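The GPG idea can be sketched end to end: the sender signs a message, and the recipient verifies the signature, which a phisher without the private key cannot forge. Everything below (names, addresses, files, key parameters) is invented for the demonstration, which uses a throwaway keyring; it assumes GnuPG 2.1 or later.

```shell
#!/bin/sh
# Throwaway keyring so the demo touches nothing real.
export GNUPGHOME=$(mktemp -d)
chmod 700 "$GNUPGHOME"
cd "$GNUPGHOME"

# The legitimate sender generates a key pair (no passphrase; demo only).
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key 'Bank Alerts <alerts@bank.example>' default default never

# The sender clear-signs an outgoing notice, producing notice.txt.asc.
echo 'Please review your recent account activity.' > notice.txt
gpg --batch --pinentry-mode loopback --passphrase '' --clearsign notice.txt

# The recipient verifies the signature; a forged message fails this check.
gpg --verify notice.txt.asc
```

In practice the recipient would have imported the sender's public key beforehand, so a phishing e-mail that is unsigned, or signed with a different key, stands out immediately.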
Part 2 Robots exclusion standard
The robot exclusion standard, also known as the Robots Exclusion Protocol or robots.txt protocol, is a convention to prevent cooperating web spiders and other web robots from accessing all or part of a website which is otherwise publicly viewable. Robots are often used by search engines to categorize and archive web sites, or by webmasters to proofread source code. The standard complements Sitemaps, a robot inclusion standard for websites.
The protocol, however, is purely advisory. It relies on the cooperation of the web robot, so that marking an area of a site out of bounds with robots.txt does not guarantee privacy. Some web site administrators have tried to use the robots file to make private parts of a website invisible to the rest of the world, but the file is necessarily publicly available and its content is easily checked by anyone with a web browser.
There is no official standards body or RFC for the robots.txt protocol. It was created by consensus in June 1994 by members of the robots mailing list (firstname.lastname@example.org). The information specifying the parts that should not be accessed is specified in a file called robots.txt in the top-level directory of the website. The robots.txt patterns are matched by simple substring comparisons, so care should be taken to make sure that patterns matching directories have the final ‘/’ character appended, otherwise all files with names starting with that substring will match, rather than just those in the directory intended.
1. This example allows all robots to visit all files, because the wildcard "*" specifies all robots and the empty Disallow matches nothing:
User-agent: *
Disallow:
2. This example keeps all robots out:
User-agent: *
Disallow: /
3. The next example tells all crawlers not to enter four directories of a website:
User-agent: *
Disallow: /cgi-bin/
Disallow: /images/
Disallow: /tmp/
Disallow: /private/
4. This example tells a specific crawler not to enter one specific directory:
User-agent: BadBot # replace ‘BadBot’ with the actual user-agent of the bot
Disallow: /private/
5. This example tells all crawlers not to enter one specific file:
User-agent: *
Disallow: /directory/file.html
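The substring-matching caveat mentioned above can be seen directly with a simple prefix check, which is how robots.txt patterns are traditionally matched. The paths below are invented: a pattern without the trailing slash also blocks files whose names merely start with the same substring.

```shell
#!/bin/sh
# Simple prefix match, as robots.txt consumers traditionally do.
blocked() {            # usage: blocked <pattern> <path>
  case "$2" in
    "$1"*) echo "blocked" ;;
    *)     echo "allowed" ;;
  esac
}

blocked /private  /private-files.html   # prints blocked  (substring match!)
blocked /private/ /private-files.html   # prints allowed  (trailing slash limits it)
blocked /private/ /private/notes.txt    # prints blocked
```

This is exactly why the text recommends appending the final '/' to patterns meant to match directories.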