Wednesday, July 10, 2013

php_svn.dll for x86 PHP 5.3

http://fragfrog.nl/blog/142/SVN%20integration%20for%20PHP%20with%20pvp_svn/

Tuesday, July 9, 2013

McEdit As Default Editor (CentOS/RedHat)

https://gist.github.com/inri13666/5957100

crontab && VPS with cPanel installed && PHP script

For cron tasks, use "/usr/local/bin/php -f" instead of plain "php".

For example:
*/2 * * * *  /usr/local/bin/php -f /{#PATH_TO_PHP_SCRIPT} >> /dev/null

Wednesday, July 3, 2013

How to make multiple updates using a single query in MySQL

Source: http://blog.bubble.ro/how-to-make-multiple-updates-using-a-single-query-in-mysql/


As you might know, it's quite easy to make multiple INSERTs in a single query, like this:
INSERT INTO mytable
(id, title)
VALUES
('1', 'Lord of the Rings'),
('2', 'Harry Potter');
However, for some strange reason you can’t do multiple changes to a table in a single UPDATE query like this:
UPDATE mytable
SET (title='Great Expectations' WHERE id='1'),
(title='War and Peace' WHERE id='2');


There is, however, a very interesting trick: you can combine an UPDATE with a CASE like this:
UPDATE mytable SET title = CASE
WHEN id = 1 THEN 'Great Expectations'
WHEN id = 2 THEN 'War and Peace'
ELSE title
END;
The ELSE title part is very important; otherwise you will overwrite the rest of the table with NULL.
You might wonder why on earth you’d want to make multiple updates in a single query. Here’s an example that might not be so obvious:
Let’s say you want to make a page view counter for your shop, but you want to implement caching for your pages, so running an SQL UPDATE query for each page view is out of the question. An efficient solution is to keep a logfile, with each view appended as a new line.
Here’s an example of such a logfile, each view logged on a single line using the format IP|PRODUCT_ID:
78.32.43.2|3
54.133.87.54|2
85.83.93.91|4
The append part is the most important, since append is the only file-writing mode that is safe to use from multiple processes. Remember that you have MANY viewers at the same time; if just two of them try to write the file simultaneously in a normal write mode, the content will be corrupted (you don’t lose the file itself, you lose its content, which matters even more).
An alternative would be file locking, but that is really inefficient when you have lots of visitors (Digg effect, anyone?). The append mode, however, is optimized by the operating system, which buffers the content and makes sure it is written properly.
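For illustration, the logging side can be as small as this (a minimal sketch; the logfile name and the way $ip and $product_id are obtained here are assumptions, not part of the original article):
// Hypothetical logging side of the counter: one line per view, appended.
// FILE_APPEND opens the file in append mode, which is what makes
// concurrent writers acceptable for this use case.
$ip         = $_SERVER['REMOTE_ADDR'];  // visitor IP
$product_id = (int)$_GET['id'];         // assumed product id parameter
file_put_contents('logfile.txt', $ip.'|'.$product_id."\n", FILE_APPEND);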
So, the best solution would be to make a logfile and then process this file with a cron job. It’s very easy to do that, but the challenge lies in updating the database. Here’s where the multiple updates in a single query trick comes into play.
You can just create one long query to update the database and run it only once, instead of hundreds of small queries (which, in case you didn’t figure it out, would bring your database to its knees in many cases).
So we can make a script to parse our logfile like this:
$logfile = 'logfile.txt';

$views = array();  // product id => number of unique views
$ip_db = array();  // IPs we have already counted (each IP counts only once)

$lines = file($logfile);

$cnt = count($lines);
for ($i = 0; $i < $cnt; $i++)
{
 // each line has the format IP|PRODUCT_ID
 if (preg_match('/([0-9.]+)\|([0-9]+)/', $lines[$i], $regs))
 {
  $ip = $regs[1];
  $id = $regs[2];

  if (!isset($ip_db[$ip]))
  {
   $ip_db[$ip] = 1;
   // initialize on first sight to avoid an undefined index notice
   $views[$id] = isset($views[$id]) ? $views[$id] + 1 : 1;
  }
 }
}

if (empty($views))
{
 exit;
}

$query = "UPDATE products SET views = CASE ";

$idlist = '';

// each() is deprecated in current PHP; foreach does the same job
foreach ($views as $id => $val)
{
 $query .= " WHEN id = ".$id." THEN views + ".$val." ";
 $idlist .= $id.',';
}

// strip the trailing comma
$idlist = substr($idlist, 0, -1);

$query .= "
END
WHERE id IN (".$idlist.")";

// run $query
Simple and efficient. And did I mention it’s also free?

Important update:

Just as Jesper pointed out, using transactions instead of CASE can make a huge difference.
Here are the results of some tests made on a database of 316,878 records, using both the MyISAM and InnoDB storage engines.
InnoDB

Rows updated | Transactions time (in seconds) | CASE time (in seconds)
400          | 6                              | 11
1000         | 20                             | 17
30000        | (too long)                     | (too long)

MyISAM

Rows updated | Consecutive queries time (in seconds) | CASE time (in seconds)
400          | 0                                     | 6
1000         | 0                                     | 13
30000        | 10                                    | (too long)
As you can see, the results are very interesting. You can clearly see the difference made both by the storage engine and by transactions (at least for MyISAM). In other words, if you use MyISAM (and many people do), use transactions for multiple updates. If you use InnoDB, switch to MyISAM and then use transactions.
The CASE method seems to be efficient only for a few updates (so you could set up a cron job to run the updates more frequently), but overall you can clearly see that transactions still give better results.
I hope the method described in this post helped you, or at least gave you more ideas on how to use it in your projects; and if you didn’t know about transactions, maybe you should give them more attention (I sure will from now on).

Second (and hopefully final) update:

After a bit of research I figured out that MyISAM doesn’t support transactions (yes, silly me), so the tests above were done using simple consecutive queries (the tables above have been relabeled accordingly). However, it seems MySQL does some internal optimizations and runs them very efficiently. If you have a busy database, it’s a good idea to run a LOCK TABLES query before the batch update (and of course don’t forget to UNLOCK the table when done), as sketched below.
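As a rough illustration (a hypothetical sketch, not code from the article: it assumes an open mysqli connection in $db and the $views array built by the log parser above):
// Hypothetical sketch: running the batched UPDATEs under a table lock.
$db->query('LOCK TABLES products WRITE');
foreach ($views as $id => $val)
{
 $db->query('UPDATE products SET views = views + '.(int)$val.' WHERE id = '.(int)$id);
}
$db->query('UNLOCK TABLES');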
Also, after a few suggestions, I arrived at a much faster version (read the comments below; thanks, guys).
So here’s the final CASE version:
UPDATE mytable SET title = CASE
WHEN id = 1 THEN 'Great Expectations'
WHEN id = 2 THEN 'War and Peace'
...
END
WHERE id IN (1,2,...)
Although it doesn’t beat the speed of normal queries run in a row, it can still get close enough.
Since many people asked why bother that much when you can just run the normal queries: consider this an experiment in finding alternatives. Also, keep in mind that in order to take advantage of MySQL’s optimizations, all those UPDATE queries need to be run as a batch. Running each UPDATE query as the page is viewed, for example, is not the same thing (and that’s pretty much the problem I wanted to solve as efficiently as possible).
Also, as a final note, you should always run an OPTIMIZE TABLE query after all these updates.
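For completeness, under the same assumptions as the sketch above (a hypothetical $db mysqli connection and the article’s products table):
// Hypothetical: defragment/re-optimize the table after the batch of updates
$db->query('OPTIMIZE TABLE products');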