Anonymous - 2016-12-31

Hello,

Sorry for this question, it's probably stupid; I am a newbie.
I want the crawler to visit all pages, download all of their contents, and collect them into one array (?) so I can work with it. Specifically, I want to extract the email addresses.

The snippet below worked fine when I fetched a single page's content the usual PHP way (file_get_contents()), but I can't get it to work here:

preg_match_all('#[a-z0-9_-]?[a-z0-9._-]+[a-z0-9_-]?@[a-z.-]+\.[a-z]{2,}#i', $DocInfo->source, $subpattern);

$emails = array();
//
// loop over the returned matches
//
foreach ($subpattern[0] as $email) {
    $emails[] = $email;
}

$emails = array_unique($emails);

if (empty($emails)) {

    echo("The list is empty.\n");
} else {

    echo("<ul>\n");

    foreach ($emails as $val) {

        echo("<li>" . htmlentities($val, ENT_QUOTES, 'UTF-8') . "</li>\n");
    }
    echo("</ul>\n");
}
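
For context, here is a stripped-down sketch of how I am trying to hook this into the crawler. The class layout and the handleDocumentInfo() callback follow the example from the PHPCrawl docs; everything else (names, URL) is simplified, so my mistake may also be in this part:

<?php
// Collect email addresses from every page the crawler visits.
include("libs/PHPCrawler.class.php");

class MailCrawler extends PHPCrawler
{
    public $emails = array(); // matches from ALL visited pages end up here

    function handleDocumentInfo($DocInfo)
    {
        // $DocInfo->source holds the content of the page just received
        preg_match_all('#[a-z0-9_-]?[a-z0-9._-]+[a-z0-9_-]?@[a-z.-]+\.[a-z]{2,}#i',
                       $DocInfo->source, $subpattern);

        foreach ($subpattern[0] as $email) {
            $this->emails[] = $email;
        }
    }
}

$crawler = new MailCrawler();
$crawler->setURL("www.example.com");                // placeholder start URL
$crawler->addContentTypeReceiveRule("#text/html#"); // only parse HTML documents
$crawler->go();

$emails = array_unique($crawler->emails);           // dedupe across all pages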

Thank you so much for your help in advance!