To check whether a web page exists in PHP, the simple approaches are fopen / file_get_contents and so on; there are plenty of ways to do it. But all of those pull the entire HTML page back, so when there are many URLs to check, it gets slow. (In short: how can PHP quickly check whether a URL exists?)
The check can be done from the HTTP headers alone, with no need to fetch the whole page body (for details see: Hypertext Transfer Protocol -- HTTP/1.1).
Checking the HTTP header with fsockopen
A simple example follows (reposted from: PHP Server Side Scripting - Checking if page exists):
if ($sock = fsockopen('something.net', 80)) {
    fputs($sock, "HEAD /something.html HTTP/1.0\r\nHost: something.net\r\n\r\n");
    while (!feof($sock)) {
        echo fgets($sock);
    }
    fclose($sock);
}
This produces output like the following:
HTTP/1.1 200 OK
Date: Mon, 06 Oct 2008 15:45:27 GMT
Server: Apache/2.2.9
X-Powered-By: PHP/5.2.6-4
Set-Cookie: PHPSESSID=4e037868a4619d6b4d8c52d0d5c59035; path=/
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Vary: Accept-Encoding
Connection: close
Content-Type: text/html
But the approach above still leaves many problems to handle, such as 302 redirects. The simpler way is to let curl take care of those headaches for us.
Thanks to Alen for the reminder: PHP's get_headers() can also be used to fetch the information above.
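As a quick sketch of that approach (note that get_headers() issues a real request, using GET by default; the status-line parsing helper below is my own addition, not from the original post):

```php
<?php
// Extract the numeric status code from an HTTP status line,
// e.g. "HTTP/1.1 200 OK" -> 200. Returns false if it does not parse.
function status_code_from_line($line)
{
    if (preg_match('/^HTTP\/\d+(?:\.\d+)?\s+(\d+)/', $line, $m)) {
        return intval($m[1]);
    }
    return false;
}

// Usage (performs a real network request, so it is commented out here):
// $headers = get_headers('http://tw.yahoo.com');
// var_dump(status_code_from_line($headers[0]));

var_dump(status_code_from_line('HTTP/1.1 200 OK'));        // int(200)
var_dump(status_code_from_line('HTTP/1.0 404 Not Found')); // int(404)
```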
Checking with PHP + Curl + Content-Type
For using PHP + Curl to check whether a page exists, see: How To Check If Page Exists With CURL | W-Shadow.com
That program checks status information such as 200 OK (anything between 200 and 400 counts as a normal status).
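The core idea can be sketched as follows (a minimal version of my own, not the full program from that post): send a header-only request with CURLOPT_NOBODY, let curl follow redirects, and read the final status code from curl_getinfo():

```php
<?php
// Return the final HTTP status code for $url, or false on a transport error.
// CURLOPT_NOBODY means only the headers are transferred, not the page body.
function http_status($url)
{
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_NOBODY, true);         // header-only request
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // curl handles 302 redirects
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 20);
    if (curl_exec($ch) === false) {
        curl_close($ch);
        return false;
    }
    $code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);
    return $code;
}

// A status between 200 and 399 counts as "the page exists".
function is_success_code($code)
{
    return $code >= 200 && $code < 400;
}

// Usage (performs a real network request):
// $code = http_status('http://tw.yahoo.com');
// var_dump($code !== false && is_success_code($code));
```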
Basically, the program above is good enough, but user input comes in all shapes and sizes, so extra checks are needed. Here are a few problematic inputs, grabbed at random:
- xxx@ooo.com # an email address
- http://xxx.ooo.com/abc.zip # a compressed archive
- <script>alert('x')</script> # kindly checks whether you have an XSS hole =.=|||
Because of inputs like these, they have to be filtered out: check that the value is a normal URL, and that the Content-Type is one we actually want.
So the program is modified as follows (adapted from: How To Check If Page Exists With CURL):
<?php
function page_exists($url)
{
    $parts = parse_url($url);
    if (!$parts) {
        return false; /* the URL was seriously wrong */
    }
    if (empty($parts['scheme']) || !in_array($parts['scheme'], array('http', 'https'))) {
        return false; /* only http / https URLs make sense here */
    }
    if (isset($parts['user'])) {
        return false; /* user@gmail.com */
    }

    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);

    /* set the user agent - might help, doesn't hurt */
    //curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)');
    curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (compatible; wowTreebot/1.0; +http://wowtree.com)');
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);

    /* try to follow redirects */
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);

    /* timeout after the specified number of seconds. assuming that this script runs
       on a server, 20 seconds should be plenty of time to verify a valid URL. */
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 15);
    curl_setopt($ch, CURLOPT_TIMEOUT, 20);

    /* don't download the page, just the header (much faster in this case) */
    curl_setopt($ch, CURLOPT_NOBODY, true);
    curl_setopt($ch, CURLOPT_HEADER, true);

    /* handle HTTPS links (skip certificate verification) */
    if ($parts['scheme'] == 'https') {
        curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 0);
        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
    }

    $response = curl_exec($ch);
    curl_close($ch);

    /* allowed content-type list; capture the full type, e.g. "text/html" */
    $content_type = false;
    if (preg_match('/Content-Type:\s*([a-z0-9.+-]+\/[a-z0-9.+-]+)/i', $response, $matches)) {
        switch ($matches[1]) {
            case 'application/atom+xml':
            case 'application/rdf+xml':
            //case 'application/x-sh':
            case 'application/xhtml+xml':
            case 'application/xml':
            case 'application/xml-dtd':
            case 'application/xml-external-parsed-entity':
            //case 'application/pdf':
            //case 'application/x-shockwave-flash':
                $content_type = true;
                break;
        }
        if (!$content_type && (preg_match('/text\/.*/', $matches[1]) || preg_match('/image\/.*/', $matches[1]))) {
            $content_type = true;
        }
    }
    if (!$content_type) {
        return false;
    }

    /* get the status code from HTTP headers */
    if (preg_match('/HTTP\/1\.\d+\s+(\d+)/', $response, $matches)) {
        $code = intval($matches[1]);
    } else {
        return false;
    }

    /* see if code indicates success */
    return (($code >= 200) && ($code < 400));
}

// Test & usage:
// var_dump(page_exists('http://tw.yahoo.com'));
?>
Content-Type information
The Content-Type values above can be found in the following files:
- /etc/mime.types
- /usr/share/doc/apache-common/examples/mime.types.gz
- /usr/share/doc/apache2.2-common/examples/apache2/mime.types.gz # this one is recommended
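Those files use one type per non-comment line (the type, then whitespace, then the file extensions mapped to it), so a whitelist of MIME types can be built straight from them. As a sketch (the parsing helper is my own illustration, and the path may differ per distribution):

```php
<?php
// Parse mime.types-style content into a flat list of MIME types.
// Each non-comment line starts with a type such as "text/html",
// optionally followed by the file extensions mapped to it.
function parse_mime_types($content)
{
    $types = array();
    foreach (explode("\n", $content) as $line) {
        $line = trim($line);
        if ($line === '' || $line[0] === '#') {
            continue; // skip blank lines and comments
        }
        $fields = preg_split('/\s+/', $line);
        $types[] = $fields[0]; // first field is the MIME type
    }
    return $types;
}

// Usage with the real file:
// $types = parse_mime_types(file_get_contents('/etc/mime.types'));

$sample = "# MIME type mappings\ntext/html\thtml htm\napplication/xml\txml\n";
print_r(parse_mime_types($sample)); // lists text/html and application/xml
```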