Grumpy


Grumpy's Activity

  1. Grumpy added a post in a topic: Fetching templates uses a large amount of mysql resource   

    I expect this not to be an issue for the significant majority of users, but I'm adding this feedback with the goal of improving performance and multi-server optimization.
    Currently CCS fetches templates from the database on every page load, regardless of any caching setup. I would suggest that where apc/xcache/etc. is available as a local cache, it should be used for templates as well, with very short timeouts.
    If the site is set up across multiple servers, fetching CCS templates alone from the SQL server can easily exceed 1 Gbps, and MySQL will tie up a lot of sockets writing to the network.
     
    For example:
    /admin/applications_addon/ips/ccs/sources/pages.php Line ~182 and ~450
    $skinFile    = $this->DB->buildAndFetch( array( 'select' => 'cache_content', 'from' => 'ccs_template_cache', 'where' => "cache_type='full'" ) ); could be altered to the following (semi pseudo-code here):
    if ( $cache ) { $skinFile = $cache; } else { $skinFile = $this->DB->buildAndFetch( array( 'select' => 'cache_content', 'from' => 'ccs_template_cache', 'where' => "cache_type='full'" ) ); }
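    To make the idea a bit more concrete, here is a minimal sketch using APC, with made-up cache key and TTL values. This is not IPB's actual caching API, just an illustration of putting a short-lived local cache in front of the query:

    // Keep the compiled CCS template in the local cache for a few seconds so a
    // busy multi-server setup doesn't hit MySQL on every single page load.
    $cacheKey = 'ccs_template_cache_full';   // assumed key name
    $ttl      = 30;                          // seconds; short, so template edits still show up quickly

    $skinFile = function_exists( 'apc_fetch' ) ? apc_fetch( $cacheKey ) : false;

    if ( $skinFile === false )
    {
        $skinFile = $this->DB->buildAndFetch( array( 'select' => 'cache_content', 'from' => 'ccs_template_cache', 'where' => "cache_type='full'" ) );

        if ( function_exists( 'apc_store' ) )
        {
            apc_store( $cacheKey, $skinFile, $ttl );
        }
    }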
  2. Grumpy added a post in a topic: Something is causing runaway PHP processes and I cant find what   

    Though I'm quite late to respond, I'd advise against raising the process limit without knowing what system/resources you have. If you have runaway PHP processes, allowing MORE runaway processes only makes the problem worse. Comparatively speaking, even on my beefy E5-1650v2s I don't need more than 50 processes to max out the CPU.
    You can decrease max_execution_time a lot. It partly depends on what you feel is acceptable. If you think ALL pages on the site should load in under 3 seconds and the average is around ~100 ms, you can even set it to 3, but you will then likely fail to serve roughly 1 in 1,000 normal pages just due to randomness (no calculation here, just a rough estimate). You can set a higher limit to make it safer and fail less often. It's a balance between killing runaways early and causing failures.
  3. Grumpy added a post in a topic: Something is causing runaway PHP processes and I cant find what   

    Can you post some more stats? 'top' for starters. See sticky.
    You probably peak at 51 processes because your setting probably has a cap of 50 processes (plus 1 for the controller).
    What are you running? Apache? suPHP? nginx? PHP-FPM? Which versions?
    Do you have modules/hooks/etc. installed? Have you tried disabling them?
    Also, don't set up cron to kill PHP. There's a setting in php.ini, max_execution_time, for exactly that.
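    For reference, a minimal sketch of where that limit lives; the numbers are only illustrative, not a recommendation:

    ; php.ini -- cap script runtime instead of killing processes from cron
    max_execution_time = 30

    ; If you are on PHP-FPM, the pool config (e.g. www.conf) also has a wall-clock
    ; kill switch, which is useful because max_execution_time does not count time
    ; spent waiting on I/O such as MySQL queries:
    ; request_terminate_timeout = 60s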
  4. Grumpy added a post in a topic: Yet another critical SSL Security Flaw... "POODLE"   

    lol, my Windows Update had 24 updates because of this. xD (at least I'm guessing it's because of POODLE)
  5. Grumpy added a post in a topic: [Eek. Help, please!] Would someone like to help me with my limited server?   

    Bit of a gap between "works" and "supported". But it does seem XCache had updates recently to add 5.5/5.6 support. Even PHP 5.4 support is still tagged as beta for APC. I wouldn't use it... APC development is dead.
  6. Grumpy added a post in a topic: [Eek. Help, please!] Would someone like to help me with my limited server?   

    Well... you can simply choose to live with 400 ms; it's not that bad, unless the site feels really slow to browse, which may mean the issue isn't your forum's processing but other elements within the page. The 500~600 ms being added by the network is a significant chunk. I'm not sure where you are hosted or where the previous 1 s was measured from, but if you want to reduce network delay, you'll basically have to move away from your current provider, since I'm guessing there's no premium bandwidth option for you (quite rare for a VPS).
     
    Since the network is making the big difference, I'd suggest adding gzip compression (or checking that it is enabled). Also, make sure browser-based caching is set for images/js/css so that further browsing gets much faster. Those should make the greatest difference for you, given that the majority of the delay is caused by the connection. (A rough example of the gzip side is below.)
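    For example, on Apache the gzip part is just a few mod_deflate lines; a minimal sketch, assuming mod_deflate is enabled (goes in the vhost config or .htaccess):

    <IfModule mod_deflate.c>
        # Compress text-based responses; images are already compressed.
        AddOutputFilterByType DEFLATE text/html text/plain text/css text/javascript application/javascript application/json
    </IfModule>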
     
    With a fancy setup, getting down to around 50 ms is possible. That'll be a question of how much more effort you want to invest. General optimization with an opcode cache and a user cache is highly recommended.
     
    Since you have PHP 5.5, the only supported opcode cache option is Zend OPcache, which... I may be wrong, but is still unsupported by cPanel. If so, you'd have to install Zend OPcache manually and stop using EasyApache.
     
    For the user cache, to keep it simple, I recommend memcache. You'll need to install memcache, the PHP module, via EasyApache (or manually) into your PHP, and then install memcached, the service, from yum. Make sure you don't confuse memcache with memcached. Afterwards, configure it in your IPB. Config info for memcached with IPB is found here:
    http://community.invisionpower.com/resources/documentation/index.html/_/tutorials/large-communities/using-alternate-cache-storage-r169
    You should also configure memcached (the service). If you installed it via yum, the config is in /etc/sysconfig/memcached, though the defaults may be good enough.
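    For reference, the stock file on CentOS usually looks roughly like this; CACHESIZE is in MB and is the main knob worth raising on a busy board, and binding to localhost is a sensible extra when the web server runs on the same box:

    PORT="11211"
    USER="memcached"
    MAXCONN="1024"
    CACHESIZE="64"
    OPTIONS="-l 127.0.0.1"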
  7. Grumpy added a post in a topic: [Eek. Help, please!] Would someone like to help me with my limited server?   

    Something doesn't seem right... You can tweak MySQL more, but we need to find the big change, not small ones. 900 ms for a single page load is too high, and it's not something minor MySQL tweaking is going to solve. Your last top had no real issues, but your top right now is showing a higher %wa than %us, which is alarming: it means your processor is waiting on I/O more than it is actually working, even though it has work to do. I'm wondering whether that's an out-of-the-norm result or the opposite.
     
    We may also want to make sure that what you're measuring as response time is accurate first, taking the network out of the equation. Fancy load testers aside, try running:
     

    time wget http://yoursite.com/forum/directory/index -O /dev/null  
    Run that about 10 times, spaced apart. Just post the "real" time stats; we don't need the wget details. (A quick loop that does the same is below.)
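    If you'd rather not run it by hand, something like this bash loop gives the same samples; the sleep is only there to space them out:

    for i in $(seq 1 10); do
        ( time wget http://yoursite.com/forum/directory/index -O /dev/null ) 2>&1 | grep real
        sleep 30
    done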
  8. Grumpy added a post in a topic: [Eek. Help, please!] Would someone like to help me with my limited server?   

    Dang, didn't see that you were on MySQL 5.6... so I forgot to also tell you to set:
    query_cache_type = 1
     
    Tidbit: as of 5.6 the query cache is disabled by default, so setting query_cache_size alone doesn't enable it; you must also enable it by setting query_cache_type (which defaults to 1 on <= 5.5 and 0 on >= 5.6).
     
    ----------------
     
    open_files_limit 
    If this is not mentioned anywhere in your MySQL config, it's fine. If it's set to 0, it's also fine; 0 (the default) means auto.
    If it is manually set to 4161, please set it to 0 or 50000.
     
    ------------------------------
     
    table_open_cache 
    MySQL tuner will keep asking for more, but due to how IPB works, raising it won't help. Just a side note.
     
    ------------------------
     
    [!!] Joins performed without indexes: 4532
    Did you install any addons/modules/etc. on your site that might cause this? It basically means data is being pulled inefficiently. To correct it, you must either stop issuing that query, adjust it so that it hits the indexes, or add indexes where applicable. Its mere existence isn't an issue, but if it becomes a sizeable chunk, it will become a big one. As far as I'm aware, a default IPB install yields near or exactly zero here. Currently it's ~0.44% for you (4k / 907k).
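    If you want to track down which queries those are, MySQL can log them for you. A minimal sketch for /etc/my.cnf (under the [mysqld] section; the log path is just a conventional choice):

    slow_query_log                = 1
    slow_query_log_file           = /var/log/mysql-slow.log
    log_queries_not_using_indexes = 1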
  9. Grumpy added a post in a topic: [Eek. Help, please!] Would someone like to help me with my limited server?   

    In the MySQL config (/etc/my.cnf), try:
    query_cache_size = 64M
    join_buffer_size = 4M
    tmp_table_size = 32M
    max_heap_table_size = 32M
    Add these if not already present, or change them if they are.
    Then restart MySQL, wait 24 hours, and post the tuner output and top again.
     
    Also, try running ioping.
    https://code.google.com/p/ioping/
     
    Run:
    ioping .
    Hit Ctrl + C if you think you got enough samples. Please don't run other ioping features unless you know what you're doing.
  10. Grumpy added a post in a topic: Adding a close or hide (x) to a message box   

    Using different cookie names works fine. Another way is to vary the cookie value: for week 1, if the cookie value is 1, don't show it anymore; for week 2, if the value is 2, don't show it; and so on. This way you only use one cookie, though it's not a big difference either way. (Rough sketch below.)
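    A rough PHP sketch of the single-cookie idea, with made-up names (the real hooks/templates in your board will differ):

    // One cookie whose value is the current announcement "round" (ISO year + week number).
    $currentRound = date( 'oW' );                       // e.g. "201442"
    $dismissed    = isset( $_COOKIE['msgBoxDismissed'] ) ? $_COOKIE['msgBoxDismissed'] : '';

    if ( $dismissed !== $currentRound )
    {
        // Render the message box; its close (x) control should then set the cookie, e.g.:
        // setcookie( 'msgBoxDismissed', $currentRound, time() + 90 * 86400, '/' );
    }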
  11. Grumpy added a post in a topic: Moving from Shared to VPS, help   

    To put it in a bad way, old and outdated.
    To put it in a good way, tried and tested.
     
    Enterprise solutions always run older software because they want things to keep running without a restart for years, with all the bugs shaken out over years. Ubuntu's attitude is "let's try all the new shiny things!", so it has all the new stuff, but it won't provide the same level of stability. Ubuntu's primary target market is desktop users, and it really is the best Linux OS for the desktop. CentOS/RHEL's bleeding edge is Fedora, which works like CentOS/RHEL but has the new stuff, and CentOS/RHEL's primary target market is servers.
  12. Grumpy added a post in a topic: Moving from Shared to VPS, help   

    Unless you picked Ubuntu for a specific reason, I suggest you get the VPS reinstalled with CentOS 6 (or 7; 7 just came out, so there aren't many guides for it yet). It'll be more relevant for hosting purposes, with a bigger community to get the right assistance from. A control panel like cPanel also becomes an option if you want one.
     
    You can also post to http://community.invisionpower.com/resources/projects to hire people. (The link is at the very top; quite invisible, yes.)
  13. Grumpy added a post in a topic: how to measure performance of IPS board (numebr of users) before going to Production?   

    First, the answer you seek. But be sure to read to the end too...
     
    There is no certain way to know, since so much of it is based on assumptions.
     
    You can use load-testing tools like jmeter to test this. jmeter is fairly advanced, though, and you may choose to do a simpler bench with ApacheBench (ab). With ab (after installing it), you just run "ab -n 500 -c 50 http://yoursite.com/". That requests the given address 500 times with 50 concurrent requests, simulating 50 users hitting that one page simultaneously (within the response period) until 500 requests are done. This is a lot more load than you are likely to get, because 50 real people don't hit pages at the very same moment. If processing takes 0.1 seconds and each user views a page for 60 seconds, then keeping a constant 50 requests in flight means 50 / 0.1 = 500 requests per second, which at one page view per user per minute works out to roughly 30,000 active users. As you can see, how "simultaneous" is defined makes a huge difference. IPS's definition of simultaneous is usually unique IPs over 15 minutes, like here.
     
    Of course, getting the math right is more complicated, since processing time is itself a function of the request rate. Also, this tests only that ONE page, not all the other files a real user would request.
     
    So really, just treat it as a very, very rough idea of the load. And interpreting the result is a question of its own.
     
    Now, having said that...
     
    DO NOT RUN ANY HEAVY BENCHES ON SHARED HOSTING. DO NOT RUN ANY HARDWARE TESTING BENCHES ON VPS/CLOUD.
    Those resources are shared. Hogging them to see the maximum performance means you drag everyone else down with you: every site and application hosted on that physical server slows down because of you. It's also a quick way to get thrown off your server for a ToS violation.
     
    So, rather than testing what is the maximum capacity, think of what is reasonable to run, and see if you can run that.
     
    Benches are also somewhat meaningless on any sort of shared hardware, because performance fluctuates a lot, enough to make them pointless. You could be pushing out 1000 pages/s right now, but that doesn't mean you can do it tomorrow; another customer might hog the resources and you might not even get 1 page/second. Getting a good host largely means getting one that manages those resources well, so that no single user can hog them.
  14. Grumpy added a post in a topic: Can a new board operate on shared hosting? what are user limits ?   

    Errmm... Guess I'm replying to all your threads.
     
    1) Don't know. Can't know.
    2) When do you want a VPS? When you feel comfortable managing one. Really, low-end VPSes aren't even more expensive than shared hosting, and they almost always come with more resources.
    When do you need a VPS? When you feel that your current host is inadequate. There's no specific number I, or anyone, can give you.
     
    I'm guessing you've already bought the license, but personally, if you're going the shared route, you'd be better off with IPS's own hosting; it just seems to make more sense cost-wise. I haven't tried the IPS hosted solution myself, but since those servers host IPS communities only, they should be better optimized for IPS's needs than generic hosts. The prices do rise quickly, though.
  15. Grumpy added a post in a topic: Leverage browser caching   

    You can omit the <Directory> tags if you are putting it in .htaccess, since the .htaccess file itself already sits in the appropriate directory. Otherwise, you can put it in your Apache config. (A minimal example follows.)
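    For instance, a minimal mod_expires sketch for an .htaccess file; the types and lifetimes are only examples, so tune them to how often your static files change:

    <IfModule mod_expires.c>
        ExpiresActive On
        ExpiresByType image/png  "access plus 1 month"
        ExpiresByType image/jpeg "access plus 1 month"
        ExpiresByType text/css   "access plus 1 week"
        ExpiresByType application/javascript "access plus 1 week"
    </IfModule>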