Paulo Freitas

About Paulo Freitas

  • Rank
    Advanced Member
  • Birthday 11/25/1987

Profile Information

  • Gender Male
  • Location Campinas, São Paulo, Brazil
  1. SEO: Improving robots.txt

    Could any dev take a look at this? I think it's another SEO improvement we can get, since it's causing SERPs with duplicated content (which isn't good). This issue seems like a good one to pick up for 3.1.2, given that you've been taking SEO seriously in the last few releases. :-) Cheers,
  2. FYI: I had a problem changing my account e-mail. There was an error, but it worked anyway.

  3. SEO: Improving robots.txt

    I'm still very busy here, but what I can say for now is that there is some quite strange behavior on user profile pages, as you can see here: ""&filter=0 I don't know where they're coming from, but the results with "page__f__" seem a bit buggy and probably dangerous for SEO. Can you take a look at it? (A possible rule for this is sketched at the end of this list.) :) I'm racing against the clock to find more time to analyze these things; I think we can still optimize a lot more. :D Regards,
  4. Brazil FTW! ;D #bra #worldcup

  5. IP.Nexus Dev Update: Credits

    The last image is wrong. :P
  6. @felipensp Thank you for the new Function Array Dereferencing (FAD), friend, it's awesome! :) #php

  7. You are the most awesome hacker I know!

  8. You are the most awesome hacker I know!

  9. You are the most awesome hacker I know!

  10. SEO: Improving robots.txt

    No, but I know that not doing this has a very negative impact. It seems that IP.Board 3.1 will help with this, but the "fix" is not immediate - crawlers are quite slow to remove pages from their index. Regards,
  11. SEO: Improving robots.txt

    I don't know precisely, but I think that even if those URLs return 403 errors, crawlers will still waste your traffic unnecessarily. Blocking the URLs in robots.txt prevents compliant crawlers from requesting those addresses at all. I can imagine how much this would cost high-traffic boards. :ermm: Just my POV. :) Regards,
  12. SEO: Improving robots.txt

    Hi there,

    I want to start a discussion here on how we can optimize our default robots.txt file so it can be updated in future IP.Board versions. :) It's simple: share your experiences. Use services like Google Webmaster Tools, Bing Webmaster Center and Yahoo! Site Explorer to identify which pages are being duplicated or are throwing errors/problems in those crawlers.

    From what I've detected so far (if I'm not wrong), we can block 5 more URLs:

    Disallow: /*?s=
    Disallow: /*&s=
    Disallow: /index.php?app=core&module=global&section=login&do=deleteCookies
    Disallow: /index.php?app=forums&module=extras&section=rating
    Disallow: /index.php?app=forums&module=forums&section=markasread

    I still haven't put these lines in my own robots.txt, but I tested them in Google Webmaster Tools (GWT) and I'm convinced they will have a positive impact: 1) fewer useless indexed pages, 2) less duplicated content and 3) fewer HTML suggestions from GWT. (A consolidated sketch of these rules appears at the end of this list.)

    For the record: crawlers hate this. All these duplicated and useless pages have a negative impact on our rank with rigorous crawlers (like Google). We need to block them to reduce our penalty. :(

    I'm still analysing my GWT reports and I'll post any new useless URLs I find here. But I want more people involved to share their knowledge. :)

    Sorry if my English is not perfect, I still need to dedicate more time to learning it. :huh:

    Best regards,
  13. Done, I've created accounts on PayPal, PagSeguro and MoIP. Now I just need to make money with them. :P

  14. I've just created my PagSeguro account. Anyone out there willing to donate something? :P

  15. Wow, I just read in the Fx 3.7a4 changelog: Linux builds are now built with -fomit-frame-pointer, improving page load times on average by 4%
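
A minimal sketch of how the extra rules proposed in the "SEO: Improving robots.txt" post above could sit in a board's robots.txt. The surrounding User-agent block and the placeholder comment for existing rules are assumptions for illustration only, not the robots.txt actually shipped with IP.Board:

    User-agent: *
    # Existing default board rules would remain here (not reproduced).
    # Extra rules proposed above to cut duplicated/useless indexed pages:
    Disallow: /*?s=
    Disallow: /*&s=
    Disallow: /index.php?app=core&module=global&section=login&do=deleteCookies
    Disallow: /index.php?app=forums&module=extras&section=rating
    Disallow: /index.php?app=forums&module=forums&section=markasread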
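
For the "page__f__" results flagged in the profile-pages post above, one possible mitigation is a wildcard rule like the one below. This is only a sketch: the assumption that the problematic URLs contain a literal "page__f__" segment should be confirmed in the webmaster-tools reports before blocking anything:

    User-agent: *
    # Hypothetical rule: block any URL containing the page__f__ parameter.
    Disallow: /*page__f__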