Need help [php.ini] Logging all php errors
Posted by hbhb, 08-22-2008, 02:13 AM |
I need help with modifying php.ini to log all PHP errors to a log file.
This is because one user's script is creating a lot of core.* files, and I want to find out the source and cause in the scripts.
1. How do I turn on/enable error logging?
2. On which line should I define the log file name?
Thanks
|
Posted by RoseHosting, 08-22-2008, 03:02 AM |
Using a text editor, edit your 'php.ini' file and add the following PHP directives:
log_errors = on
error_log = /var/log/php/errors/php_error.log
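For completeness, a minimal php.ini sketch along those lines (the log path is just an example; the directory must be writable by the web server, and Apache/PHP should be reloaded afterwards for the change to take effect):
; log every PHP error to a file instead of showing it to visitors
error_reporting = E_ALL
display_errors = Off
log_errors = On
error_log = /var/log/php/errors/php_error.log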
|
Posted by LnxtecH, 08-22-2008, 05:18 AM |
As a side note, you may use gdb (GNU Debugger) to gather information from a core file.
http://www.google.co.in/search?hl=en...e+Search&meta=
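A minimal sketch of how that would look; gdb needs the executable that produced the core as well as the core file itself (the httpd path below is an assumption, since PHP usually runs as an Apache module here; on cPanel servers Apache is commonly at /usr/local/apache/bin/httpd):
# invoke gdb with the program that crashed plus the core file (paths are examples)
gdb /usr/local/apache/bin/httpd /home/mts/public_html/forums/core.27000
(gdb) bt      # print the backtrace of the crashed process
(gdb) quit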
|
Posted by hbhb, 08-22-2008, 06:54 PM |
gdb core.27000
GNU gdb Red Hat Linux (6.5-37.el5_2.2rh)
Copyright (C) 2006 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under particular conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB. Type "show warranty" for details.
This GDB was configured as "i386-redhat-linux-gnu"..."/home/mts/public_html/forums/core.27000": not in executable format: File format not recognized
(gdb) quit
What is the correct way to debug this and identify the source?
|
Posted by hbhb, 08-25-2008, 12:59 AM |
I need some tips on how to set core files to be dumped at 0 bytes.
The dump size is becoming a problem for me now.
|
Posted by junitha, 08-25-2008, 01:48 AM |
You may set a limit on the core dump size (coredumpsize) in the following file:
vi /etc/csh.cshrc
Check if the following entry exists:
limit coredumpsize 0
This will limit the size of the largest core dump that will be created to 0 bytes.
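Note that /etc/csh.cshrc only applies to csh/tcsh shells. A quick sketch of checking and setting the limit interactively, assuming a tcsh login shell:
limit coredumpsize      # show the current core dump limit
limit coredumpsize 0    # cap core dumps at 0 for this session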
Regards,
Junitha
Systems Engineer
http://SupportPRO.com :: Transparent Web Hosting Support Services to Web Hosting Businesses
|
Posted by hbhb, 08-25-2008, 04:49 AM |
Strange, it is already "limit coredumpsize 0" but the core dump files are still being created. How do I solve this?
|
Posted by junitha, 08-25-2008, 06:31 AM |
Please get back with the result of the following command:
ulimit -a
|
Posted by hbhb, 08-25-2008, 10:28 PM |
ulimit -a
core file size (blocks, -c) 1000000
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 73728
max locked memory (kbytes, -l) 32
max memory size (kbytes, -m) unlimited
open files (-n) 4096
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 14335
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
|
Posted by LnxtecH, 08-26-2008, 12:33 AM |
From the given ulimit result, your core file size limit is 1000000 blocks.
Use the command "ulimit -c 0" to set the core file size limit to 0.
|
Posted by junitha, 08-26-2008, 01:46 AM |
If you don't want core files at all, set "ulimit -c 0" in your startup files. That's the default on many systems; in /etc/profile you may find
ulimit -S -c 0 > /dev/null 2>&1
If you DO want core files, you need to override that in your own .bash_profile, for example with:
ulimit -c unlimited
To disable them for your shell instead, use:
ulimit -c 0
|
Posted by hbhb, 08-27-2008, 01:15 PM |
Is this where I should make the change (the ulimit line shown in the listing below)?
# head /etc/profile
#--------------------------------------------------------------------------------------------------
#cPanel Added Limit Protections -- BEGIN
#unlimit so we can run the whoami
ulimit -n 4096 -u 14335 -m unlimited -d unlimited -s 8192 -c 1000000 -v unlimited 2>/dev/null
LIMITUSER=$USER
if [ -e "/usr/bin/whoami" ]; then
LIMITUSER=`/usr/bin/whoami`
fi
if [ "$LIMITUSER" != "root" ]; then
|
Posted by LnxtecH, 08-28-2008, 12:26 AM |
Open /etc/profile using an editor like vi and search for "ulimit"
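For example, to locate that line without opening an editor:
grep -n ulimit /etc/profile    # print matching lines with their line numbers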
|
Posted by junitha, 08-28-2008, 02:55 AM |
First, comment out (hash) the following line in /etc/profile:
ulimit -S -c 0 > /dev/null 2>&1
Then, in the file /etc/security/limits.conf, add the following line:
root soft core 10000
as you can see below:
# -
#
#* soft core 0
#* hard rss 10000
#@student hard nproc 20
#@faculty soft nproc 20
#@faculty hard nproc 50
#ftp hard nproc 0
#@student - maxlogins 4
root soft core 10000
The value can be anything you specify. You will find more explanation of these settings in the file itself.
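If the aim is to stop core dumps for every account rather than just root, a sketch of the equivalent entries (assuming pam_limits is enabled so they are applied at login):
*    soft    core    0
*    hard    core    0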
|
Posted by hbhb, 08-28-2008, 03:36 PM |
First of all, I ran the command "ulimit -c 0":
#ulimit -c 0
# ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 73728
max locked memory (kbytes, -l) 32
max memory size (kbytes, -m) unlimited
open files (-n) 4096
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 14335
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
I also hashed (commented out) the line inside /etc/profile.
I also added "root soft core 10000" into /etc/security/limits.conf
# tail /etc/security/limits.conf
#* soft core 0
#* hard rss 10000
#@student hard nproc 20
#@faculty soft nproc 20
#@faculty hard nproc 50
#ftp hard nproc 0
#@student - maxlogins 4
root soft core 10000
# End of file
Now do I need to reboot the server or restart any services for this to take effect?
|
Posted by zacharooni, 08-28-2008, 07:49 PM |
This seems to have gotten off track; why don't we solve the problem here:
From the command line, can you:
and force a core dump that has size?
Second, if you have, try:
|
Posted by zacharooni, 08-28-2008, 08:04 PM |
Also, see here for examining a PHP core dump in detail:
http://bugs.php.net/bugs-generating-backtrace.php
|
Posted by hbhb, 08-28-2008, 09:00 PM |
Hi,
I do not know which script is generating the core dumps. The script being run is Simple Machines Forum (SMF).
|
Posted by zacharooni, 08-28-2008, 09:06 PM |
Do this in the directory:
ls -laS core.* | head -10
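As a side note, the file utility will usually report which program produced a given core dump, which helps narrow down the source:
file core.27000    # typically ends with something like: ... core file ... from 'httpd'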
|
Posted by hbhb, 08-29-2008, 01:55 AM |
After two hours of checking, the core files are still being created. Please advise if I need to reboot for this to take effect. Thanks
|
Posted by junitha, 08-29-2008, 02:14 AM |
If you need core files to be dumped at 0 bytes, then you will have to adjust the value accordingly. I just gave you an example with the value 10000,
i.e., you will have to put the following line in the file /etc/security/limits.conf:
root soft core 0
After that you just restart your server. That will do!!!
|
Posted by hbhb, 08-29-2008, 02:30 AM |
Thanks. I've changed the core value to 0 inside /etc/security/limits.conf:
# tail /etc/security/limits.conf
--------------------------------------------------------------
#* soft core 0
#* hard rss 10000
#@student hard nproc 20
#@faculty soft nproc 20
#@faculty hard nproc 50
#ftp hard nproc 0
#@student - maxlogins 4
root soft core 0
# End of file
|
Posted by LnxtecH, 08-29-2008, 05:19 AM |
By using ulimit -c, the size can be limited, but core files will still be generated. You need to find out the exact reason and resolve it.
|
Posted by hbhb, 09-03-2008, 03:28 AM |
The core files are still being created.
# ulimit -c
1000000
|