node: 26.03.2008-20:53:52
parent: macOS
owner: superpussy
created: 26.03.2008 - 20:53:52
total children: 4
Which tool should I use to defragment a disk under Mac OS X?
Or is a reinstall the better option?
sucho
29.03.2008 - 17:03:15 (modif: 29.03.2008 - 17:16:23), level: 1
Re: 26.03.2008-20:53:52
I recommend everyone read up on this topic on Wikipedia:
http://en.wikipedia.org/wiki/File_system_fragmentation
For those who don't feel like reading, I'm picking out the paragraphs that have already been sketched here, plus the important things that are still missing:
A relatively recent technique is delayed allocation in XFS and ZFS; the same technique is also called allocate-on-flush in reiser4 and ext4. This means that when the file system is being written to, file system blocks are reserved, but the locations of specific files are not laid down yet. Later, when the file system is forced to flush changes as a result of memory pressure or a transaction commit, the allocator will have much better knowledge of the files' characteristics. Most file systems with this approach try to flush files in a single directory contiguously. Assuming that multiple reads from a single directory are common, locality of reference is improved. Reiser4 also orders the layout of files according to the directory hash table, so that when files are being accessed in the natural file system order (as dictated by readdir), they are always read sequentially.
The HFS Plus file system transparently defragments files that are less than 20 MiB in size and are broken into 8 or more fragments, when the file is being opened.
Amiga SFS (Smart File System) defragments itself while the filesystem is in use. The defragmentation process is almost completely stateless (apart from the location it is working on), which means it can be stopped and started instantly. During defragmentation, data integrity is ensured for both meta data and normal data.
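The HFS Plus rule quoted above is a simple two-part predicate. Here is a minimal sketch of it in Python; the function name and the way the fragment count is supplied are my own illustration, not Apple's actual code:

```python
# Illustrative sketch of the HFS+ "defragment on open" rule quoted above:
# a file qualifies only if it is smaller than 20 MiB AND is broken into
# 8 or more fragments. All names here are invented for illustration.

TWENTY_MIB = 20 * 1024 * 1024

def hfs_plus_would_defragment(size_bytes: int, fragment_count: int) -> bool:
    """True if the quoted HFS+ rule would relocate this file on open."""
    return size_bytes < TWENTY_MIB and fragment_count >= 8

# A 5 MiB file in 12 pieces qualifies; a 100 MiB file never does,
# and a small file in only 3 pieces is left alone.
print(hfs_plus_would_defragment(5 * 1024 * 1024, 12))    # True
print(hfs_plus_would_defragment(100 * 1024 * 1024, 12))  # False
print(hfs_plus_would_defragment(5 * 1024 * 1024, 3))     # False
```

Note that both conditions must hold: large files are never touched, which is exactly why a 750 GB video file (see below in the thread) gets no help from this mechanism.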
tomas
27.03.2008 - 13:10:21, level: 1
Re: 26.03.2008-20:53:52
I have iDefrag.
I haven't run any tests,
but when I shuffle around/delete some 30 GB on a 120 GB disk two or three times a week, I prefer to run it occasionally.
Otherwise I always keep 13 GB free; below that the system starts choking badly when I'm working with photos.
maniac
27.03.2008 - 07:37:04, level: 1
Re: 26.03.2008-20:53:52
Well, my feeling is that filesystems like HFS, ext2, ReiserFS, XFS (and probably ZFS too) are designed to fragment less.
BUT I have seen a defragmenter for ext2, I think, and some fragmentation really does build up there, especially when the disk is full.
So I would either stay below maximum disk usage with 10% free and act like a happy Mac user, or reformat the filesystem from scratch, or try Google [the last two only if you have an intense urge to do something about it] :)
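maniac's "keep 10% free" figure is a rule of thumb, not a filesystem constant, but checking whether a volume has dropped below it takes only a few lines of Python:

```python
import shutil

def free_fraction(path: str) -> float:
    """Fraction of the volume containing `path` that is still free."""
    usage = shutil.disk_usage(path)
    return usage.free / usage.total

def below_free_threshold(path: str, threshold: float = 0.10) -> bool:
    """True when free space has dropped under the rule-of-thumb threshold."""
    return free_fraction(path) < threshold

if __name__ == "__main__":
    pct = free_fraction("/") * 100
    verdict = "time to free some space" if below_free_threshold("/") else "ok"
    print(f"/ has {pct:.1f}% free: {verdict}")
```

The 0.10 default encodes maniac's 10% suggestion; tomas's habit of keeping 13 GB free on a 120 GB disk works out to roughly the same fraction.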
superpussy
27.03.2008 - 13:49:03, level: 2
Re[2]: 26.03.2008-20:53:52
The thing is, I got into a situation where I needed to copy full-HD video off the Mac, and the machine was completely hosed by it. Onto a 2 TB RAID filled to about half I copied a 750 GB video file, and there was no way to get the whole thing through in one go; it usually stalled at the same spot, for no apparent reason. So I figured something like defragmentation might help. The Apple site rather recommends a reinstall for OS X, but given that I've already killed several days on this, I'd prefer to find a defragmentation tool for the future.
maniac
27.03.2008 - 19:12:57, level: 3
Re[3]: 26.03.2008-20:53:52
Well, frankly, that's a pretty extreme situation in terms of file size.
How is your RAID set up?
Couldn't there be a bad block?
Is the filesystem HFS+?
superpussy
28.03.2008 - 13:00:21 (modif: 28.03.2008 - 14:17:47), level: 4
Re[4]: 26.03.2008-20:53:52
It's a software RAID with HFS+.
A bad block can't be ruled out,
but I don't expect one.
forcer
27.03.2008 - 00:03:36, level: 1
Re: 26.03.2008-20:53:52
Does HFS+ need defragmenting?
Trilobite
27.03.2008 - 01:13:23, level: 2
Re[2]: 26.03.2008-20:53:52
:))
What forcer meant is that you don't need that sort of thing under Mac OS X :)
acidmilk
27.03.2008 - 19:31:47, level: 3
Re[3]: 26.03.2008-20:53:52
HFS+ fragments too, you Apple know-it-all .)
Hrivo
28.03.2008 - 12:51:35 (modif: 28.03.2008 - 14:50:00), level: 4
Re[4]: 26.03.2008-20:53:52
Are there filesystems where this shows up only minimally? Or ones that don't fragment at all?